Difference between Redis and Memcached
If you simply compare Redis and Memcached, most people will list the following points:

1. Redis supports not only simple key/value data, but also list, set, hash, and other data structures.

2. Redis supports data backup, i.e., backup in master-slave mode.

3. Redis supports data persistence, which keeps in-memory data on disk so it can be reloaded and reused after a restart.

In Redis, not all data is stored in memory all the time. This is one of the biggest differences compared to Memcached (in my personal opinion).

Redis keeps only the metadata for all keys cached in memory. If Redis finds that memory usage exceeds a certain threshold, it triggers a swap operation: based on "swappability = age*log(size_in_memory)", it calculates which keys' values should be swapped to disk, persists those values, and clears them from memory. This feature allows Redis to hold more data than the machine's own memory can fit. Of course, the machine's memory must still be able to hold all the keys, since keys themselves are never swapped out.
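The selection rule above can be sketched as a small scoring function (a minimal illustration of the formula only; the names `swappability`, `age_seconds`, and `size_in_memory` are hypothetical and not Redis's actual internals):

```python
import math

def swappability(age_seconds: float, size_in_memory: int) -> float:
    """Score a value for swapping to disk: older and larger
    values score higher and get swapped out first."""
    return age_seconds * math.log(size_in_memory)

# An old, large value is a better swap candidate than a fresh, small one.
candidates = {
    "session:42": swappability(age_seconds=3600, size_in_memory=4096),
    "hot:counter": swappability(age_seconds=5, size_in_memory=64),
}
best = max(candidates, key=candidates.get)  # key whose value gets swapped first
```

The `log` term dampens the influence of size, so age dominates the decision for values of comparable size.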

Also, when Redis swaps data from memory to disk, the main thread that serves requests and the child thread that performs the swap share this memory. So if you update data that is being swapped, Redis blocks the update until the child thread completes the swap operation, and only then can the change be made.

Compare memory usage before and after enabling Redis's VM feature:

VM off: 300k keys, 4096 bytes values: 1.3G used

VM on: 300k keys, 4096 bytes values: 73M used

VM off: 1 million keys, 256 bytes values: 430.12M used

VM on: 1 million keys, 256 bytes values: 160.09M used

VM on: 1 million keys, values as large as you want, still: 160.09M used

When reading data from Redis, if the value for the requested key is not in memory, Redis must load it from the swap file before returning it to the requester. This raises an I/O thread-pool question. By default, Redis blocks, i.e., it finishes loading from the swap file before responding. This strategy is appropriate when the number of clients is small and operations are batched. But if Redis is used in a large web application, it clearly cannot cope with high concurrency. So Redis lets us set the size of an I/O thread pool to serve, concurrently, the read requests that need to load data from the swap file, reducing the blocking time.
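The benefit of that thread pool can be sketched with a small simulation (an illustration only: `load_from_swap` is a hypothetical stand-in for reading a swapped-out value back from disk, not a Redis API):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def load_from_swap(key: str) -> str:
    """Simulate a blocking disk read for a value that was swapped out."""
    time.sleep(0.05)
    return f"value-of-{key}"

keys = [f"k{i}" for i in range(8)]

# Blocking strategy: requests served one at a time, total ~ 8 * 0.05s.
start = time.perf_counter()
serial = [load_from_swap(k) for k in keys]
serial_time = time.perf_counter() - start

# Thread pool: disk reads overlap, so total blocking time shrinks.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    pooled = list(pool.map(load_from_swap, keys))
pooled_time = time.perf_counter() - start
```

Because the simulated reads sleep rather than compute, the threads overlap almost perfectly and `pooled_time` is a fraction of `serial_time`.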

Comparison of Redis, Memcached, and MongoDB

The following dimensions compare Redis, Memcached, and MongoDB; criticism is welcome.

1. Performance

All three offer relatively high performance, so performance should not be a bottleneck for us.

Overall, the TPS of Redis and Memcached is roughly the same, and higher than that of MongoDB.

2. Convenience of data operations

Memcached has a single data structure (simple key/value).

Redis is richer; for data operations Redis is better, requiring fewer network I/O round trips.

MongoDB supports rich data expression and indexing; it is the most similar to a relational database, with very rich query-language support.

3. Memory space and data volume

After version 2.0, Redis added its own VM feature to break through the limits of physical memory; expiration times can be set on keys (similar to Memcached).

Memcached's maximum available memory can be configured; it uses the LRU algorithm for eviction.
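The LRU policy can be sketched with a tiny cache (an illustration of the algorithm only, not Memcached's actual slab-based implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least-recently-used entry once capacity is exceeded."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # drop the least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # "a" becomes the most recently used
cache.set("c", 3)  # evicts "b", the least recently used
```

Reading a key refreshes its recency, which is why "b" (untouched since insertion) is the one evicted.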

MongoDB is suitable for storing large volumes of data. It depends on the operating system's virtual memory for memory management and is quite memory-hungry, so the service should not be deployed alongside other services.

4. Availability (single-point problem)

For the single-point problem,

Redis relies on the client to implement distributed reads and writes. In master-slave replication, every time a slave reconnects to the master it depends on a full snapshot; there is no incremental replication, which hurts performance and efficiency.

So the single-point problem is fairly complex. Automatic sharding is not supported; you need to implement a consistent-hashing mechanism in your application.

An alternative is to bypass Redis's own replication mechanism and do active replication in the application (storing multiple copies), or switch to incremental replication (which you would need to implement yourself), trading off consistency against performance.
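The client-side consistent-hashing mechanism mentioned above can be sketched as a hash ring (a minimal illustration; the server names are made up, and real clients tune the number of virtual nodes per server for balance):

```python
import bisect
import hashlib

class HashRing:
    """Map keys to servers on a hash ring. Removing one server only
    remaps the keys that lived on that server, not the whole keyspace."""

    def __init__(self, servers, vnodes=100):
        self.ring = []  # sorted list of (hash, server)
        for server in servers:
            for i in range(vnodes):  # virtual nodes smooth the distribution
                h = self._hash(f"{server}#{i}")
                self.ring.append((h, server))
        self.ring.sort()

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def get_server(self, key: str) -> str:
        h = self._hash(key)
        # First ring position clockwise from the key's hash (wrapping around).
        idx = bisect.bisect(self.ring, (h,)) % len(self.ring)
        return self.ring[idx][1]

servers = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]
ring = HashRing(servers)
server = ring.get_server("user:1001")  # deterministic for a given key
```

Because MD5 is deterministic, every client that builds the ring from the same server list routes a given key to the same server, with no coordination.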

Memcached itself has no data-redundancy mechanism, nor does it need one. For failure prevention, it relies on proven hashing or ring algorithms to mitigate the jitter caused by a single point of failure.

MongoDB supports master-slave replication, replica sets (with an internal Paxos-style election algorithm and automatic failure recovery), and auto-sharding; failover and cutover are hidden from the client.

5. Reliability (persistence)

For data persistence and data recovery,

Redis supports persistence (snapshots and AOF): it relies on snapshots for persistence, while AOF enhances reliability at some cost in performance.
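A typical combination of the two looks like the following redis.conf fragment (an illustrative sketch; the thresholds are arbitrary examples, not recommendations):

```
# redis.conf fragment
save 900 1            # RDB snapshot if >= 1 change within 900 seconds
save 300 10           # ...or >= 10 changes within 300 seconds
appendonly yes        # enable AOF for stronger durability
appendfsync everysec  # fsync once per second: a safety/performance compromise
```

`appendfsync everysec` is the usual middle ground: `always` is safest but slowest, `no` leaves flushing entirely to the OS.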

Memcached does not support persistence; it is usually used as a cache to improve performance.

Starting from version 1.8, MongoDB uses a binlog-style journal to support reliable persistence.

6. Data consistency (transaction support)

Memcached, in concurrent scenarios, uses CAS to ensure consistency.
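The check-and-set (CAS) idea can be sketched in pure Python (a simulation of the pattern only; real Memcached clients expose it through the `gets` and `cas` protocol commands):

```python
import itertools

class CasStore:
    """Each value carries a version token; a write succeeds only if the
    token has not changed since the value was read (check-and-set)."""

    def __init__(self):
        self._data = {}
        self._tokens = itertools.count(1)

    def gets(self, key):
        """Return (value, cas_token), or (None, None) if absent."""
        return self._data.get(key, (None, None))

    def set(self, key, value):
        self._data[key] = (value, next(self._tokens))

    def cas(self, key, value, token) -> bool:
        _, current = self._data.get(key, (None, None))
        if current != token:
            return False  # someone else wrote in between: reject
        self._data[key] = (value, next(self._tokens))
        return True

store = CasStore()
store.set("counter", 0)
value, token = store.gets("counter")  # read value plus its token
store.set("counter", 100)             # a concurrent writer intervenes
ok = store.cas("counter", value + 1, token)  # stale token: update rejected
```

The rejected caller would then re-read the value (getting a fresh token) and retry, which is the standard CAS loop.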

Redis's transaction support is relatively weak: it can only guarantee that the operations in a transaction are executed consecutively.

MongoDB does not support transactions.

7. Data analysis

MongoDB has a built-in data analysis feature (map-reduce); the other two do not.
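The map-reduce model can be sketched in plain Python (a conceptual illustration only; in MongoDB the map and reduce functions are written in JavaScript and run server-side, and the sample documents here are made up):

```python
from collections import defaultdict

docs = [
    {"tags": ["cache", "redis"]},
    {"tags": ["cache", "memcached"]},
    {"tags": ["redis"]},
]

# Map phase: emit (tag, 1) for every tag in every document.
emitted = defaultdict(list)
for doc in docs:
    for tag in doc["tags"]:
        emitted[tag].append(1)

# Reduce phase: combine the emitted values per key into one result.
tag_counts = {tag: sum(values) for tag, values in emitted.items()}
```

The split matters because the map step is embarrassingly parallel across documents (or shards), while the reduce step only ever combines values that share a key.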

8. Application scenarios

Redis: performance-sensitive operations and computations on smaller data volumes.

Memcached: used in dynamic systems to reduce database load and improve performance; as a cache (suited to read-heavy, write-light workloads; for larger data volumes, sharding can be used).

MongoDB: mainly solves the problem of access efficiency for massive data.