Welcome back to Backspace Academy. In this lesson I'm going to go through the ElastiCache service. We also have two labs on ElastiCache, and they go into quite a significant amount of detail on how to use ElastiCache from the management console and the command-line interface, and we have another lab there for developers who would like to know how to program the ElastiCache Redis engine using the AWS JavaScript software development kit. So this lesson will be preparing you for those labs. First of all we'll go through exactly what ElastiCache is and the two different options that we have for a caching engine, being Memcached and Redis, and then we'll look at the ways in which we can enable caching on, for example, a data source or a database.

ElastiCache is a managed in-memory cache service. It is a key-value store and provides ultra-fast, sub-millisecond latency access to your cached data. There are two options available for that data store, one being ElastiCache Redis and the other being Memcached. It is primarily used, but not always, to reduce load on a database.
In a similar way to how you would use CloudFront to front and cache an Amazon S3 bucket, or a website on an Amazon S3 bucket, you would use ElastiCache as a cache in front of your database. That would reduce the load on your database, and it would reduce the latency on any request to that database. It provides multi-AZ capability, and you would use it in situations where a high request rate is required. So if you need very fast access to your data and there is a high request rate for that data: you have a lot of users, the volume of data is quite low but it's regularly accessed, and you require low latency to that data. The first option available to us is the Memcached engine.
Memcached is a free and open-source project and provides a high-performance, distributed memory object caching system. It only supports simple data types; the data structure is a string or an object up to one megabyte. The maximum data volume you can have in a Memcached cache is 4.7 terabytes, and there is no persistence of data: if data is lost from the cache it cannot be recovered. That is not such a bad thing, because this is a caching engine and you would normally use it in front of a database, so if you lost the data in your cache you can always retrieve it back from your original data source, being the database behind that cache. It provides simple scaling by adding or removing nodes, and the data is distributed across those nodes. It is suitable for simple data structures, for large nodes with multiple cores or threads, for situations where you require elasticity, scaling in and out by adding and removing nodes according to demand, and where you require partitioning of data across multiple shards. It's very suitable for caching databases.
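To see what partitioning data across nodes looks like, here is a minimal sketch of how a Memcached client might map each key to one node. The node names are made up for illustration, and real clients typically use consistent hashing rather than this simple modulo scheme.

```javascript
// Minimal sketch of client-side key distribution across Memcached nodes.
// Real clients use consistent hashing; this modulo scheme just illustrates
// that each key deterministically maps to exactly one node.
const nodes = ['cache-node-1', 'cache-node-2', 'cache-node-3']; // hypothetical node names

function hashKey(key) {
  // Simple string hash (djb2); production clients use stronger hash functions.
  let h = 5381;
  for (const ch of key) h = ((h * 33) + ch.charCodeAt(0)) >>> 0;
  return h;
}

function nodeForKey(key) {
  return nodes[hashKey(key) % nodes.length];
}

console.log(nodeForKey('user:42')); // always the same node for the same key
```

Because the mapping is deterministic, every client that shares the same node list sends reads and writes for a given key to the same node, which is how the data ends up partitioned across the shards.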
The next option available to us is Redis. Redis is a little bit more of an advanced caching engine. Again, it's a free and open-source project, and it's an in-memory data structure store similar to Memcached but more advanced. It supports far more advanced data structures: strings, hashes, lists, sets, sorted sets with range queries, bitmaps and HyperLogLogs. It also supports geospatial indexes with radius queries, which is great if you've got, for example, a mobile application that needs geospatial information, location information, and you need to search that information with very low latency. The key-value size can be up to 512 megabytes, so much more than Memcached's one megabyte, and the maximum data volume is slightly less than Memcached's at 3.5 terabytes. The data is persistent, so lost data can be recovered, and we also have read replicas for ElastiCache Redis. So this is good if you want to use Redis on its own as a standalone key-value store, without a database behind it. It has a very large command set available, so that, combined with the advanced data structures, means you can have very complex queries of that data, and it enables you to build very advanced applications using the ElastiCache Redis engine. It also supports notifications through the Redis pub/sub channel.
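To make "sorted sets with range queries" concrete, here is a small in-process sketch of what the Redis ZADD and ZRANGEBYSCORE commands provide, using a plain Map to stand in for Redis (a real application would issue these commands through a Redis client instead).

```javascript
// Simulates what Redis sorted sets (ZADD / ZRANGEBYSCORE) provide:
// members kept with a numeric score, queried by score range in sorted order.
class SortedSet {
  constructor() { this.scores = new Map(); }
  zadd(member, score) { this.scores.set(member, score); }
  // Return members whose score falls within [min, max], ordered by score.
  zrangebyscore(min, max) {
    return [...this.scores.entries()]
      .filter(([, s]) => s >= min && s <= max)
      .sort((a, b) => a[1] - b[1])
      .map(([m]) => m);
  }
}

// A leaderboard is the classic use case: scores auto-sort the members.
const leaderboard = new SortedSet();
leaderboard.zadd('alice', 120);
leaderboard.zadd('bob', 95);
leaderboard.zadd('carol', 150);
console.log(leaderboard.zrangebyscore(100, 200)); // → [ 'alice', 'carol' ]
```

This is the kind of query that is awkward with Memcached's plain string values but a single command in Redis.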
So you can set up a pub/sub channel, and users can subscribe to that channel and receive notifications through the Redis engine for changes to that cache. You would use Redis over Memcached, obviously, where you require those advanced data types, where you require auto-sorting of data, where you require pub/sub capabilities, where you require high availability and failover, and where you want persistent data. That is where you would use Redis. It's a more advanced version of Memcached, but Memcached is still very good if you don't require all the bells and whistles of Redis, and certainly if you have a NoSQL database, for example, and you want to put a key-value store in front of it, Memcached is great for doing that. In order for our caching engine to be useful, it needs to be updated on a regular basis or on an event-driven basis. So when data changes we need some way, for example a database trigger, to update that data. DynamoDB has database triggers, Aurora MySQL has the mysql.lambda_async procedure that can trigger a Lambda function, and MongoDB also has built-in triggers for its databases. So if our database supports triggers, we can use that to update ElastiCache, using a Lambda function or a separate EC2 instance to copy that data over to our ElastiCache engine.
We could also use our application itself to update ElastiCache, so when our application makes a read request to the database, it can also update ElastiCache at the same time. There are a couple of caching strategies that we can look at for loading our data initially into ElastiCache and also for updating that data when it changes. The first one is lazy loading, which loads data into the cache on a cache miss. What that means is that if a request comes in to ElastiCache and the data is not available, then our application will go back to the data source, retrieve that data, load it into ElastiCache, and at the same time return that data back to the requester. It requires a time to live (TTL) to be put on that data, otherwise the data will grow and become unmanageable. When data is not accessed regularly, we need a TTL on it to remove it from the cache, because a cache is only there for regularly accessed data. The downside of lazy loading is that it requires three trips on a cache miss: we first go to the cache, then, if the data is not there, we need to retrieve it from the data source, put it into ElastiCache, and then return it back to the requester.
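Here is a minimal sketch of the lazy-loading (cache-aside) flow just described. A Map stands in for ElastiCache and a plain object for the backing database; names like fetchUser and the sample record are made up for illustration.

```javascript
// Sketch of lazy loading: populate the cache only on a cache miss.
const cache = new Map();                         // stands in for ElastiCache
const database = { 'user:1': { name: 'Ada' } };  // hypothetical data source
let dbReads = 0;                                 // counts trips to the database

function fetchUser(key) {
  if (cache.has(key)) return cache.get(key);     // cache hit: one trip
  // Cache miss: go back to the data source, load the result into the
  // cache, then return it to the requester (the "three trips").
  dbReads += 1;
  const value = database[key];
  cache.set(key, value);
  return value;
}

fetchUser('user:1'); // miss: reads the database and populates the cache
fetchUser('user:1'); // hit: served from the cache
console.log(dbReads); // → 1
```

Only data that is actually requested ever enters the cache, which is the main advantage of lazy loading over write-through.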
The other strategy is write-through, and that is where our application, or a trigger event if we're using database triggers, will update the cache when data is written to the database or the database is updated. This again requires a TTL to make sure that stale data doesn't build up. The downside of write-through is that it caches infrequently accessed data: unlike lazy loading, which only caches data on a cache miss, write-through caches data whether or not it has been requested. So it does require more resources, a larger volume of data to be stored in the cache, and it definitely requires a TTL to get rid of any stale data, more so than lazy loading. For adding a TTL, Memcached and Redis both have the option, through a set command, to specify an expire parameter. In Memcached it's in seconds, and in Redis it is in either seconds or milliseconds: in the Redis SET command we have an EX parameter, which is the expire time in seconds, and PX, which is the expire time in milliseconds, if you want it to expire very quickly. When that expire time is up, that key-value is removed from the cache. So that brings us to the end of our lecture on ElastiCache.
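The expire behaviour can be sketched without a running server. This simulates what Redis does with SET key value EX seconds, using timestamps to decide when a key-value has expired; the key and value names are made up for illustration.

```javascript
// Sketch of a set command with an expire parameter (cf. Redis SET ... EX).
// Expiry is simulated with timestamps so the behaviour can be checked
// without a live cache.
const store = new Map();

function set(key, value, exSeconds) {
  store.set(key, { value, expiresAt: Date.now() + exSeconds * 1000 });
}

function get(key, now = Date.now()) {
  const entry = store.get(key);
  if (!entry) return null;
  if (now >= entry.expiresAt) {   // TTL elapsed: the key-value is removed
    store.delete(key);
    return null;
  }
  return entry.value;
}

set('session:abc', 'token-123', 60);                 // expire after 60 seconds
console.log(get('session:abc'));                     // → 'token-123'
console.log(get('session:abc', Date.now() + 61000)); // → null (expired)
```

This is the mechanism that keeps stale or rarely accessed data from accumulating under either caching strategy.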
There are a couple of labs coming up, the first one being Using ElastiCache, which is for solutions architects and SysOps administrators. You will go through using the Amazon console to create an ElastiCache Redis cache and, at the same time, accessing it through the command line. The other lab is for developers, where we use the software development kit, the JavaScript software development kit, to program ElastiCache Redis. So that brings us to the end, and I'll see you in our next lecture.