
Is anyone using this in a real production env?

Feb 8, 2008 at 3:16 PM
Has this been used in a real production environment? I'm wondering if anyone has any experience or problems with using this on a high-volume website?
Feb 8, 2008 at 6:16 PM
Edited Feb 10, 2008 at 8:40 PM
As far as memcached itself is concerned, it's used by Facebook, Digg, Wikipedia and MySpace. Here is a presentation that can provide more detail.

As far as these providers are concerned, they were written for a product that we will be taking online in January of 2009.
Feb 8, 2008 at 6:49 PM
Do you feel the providers are "production" ready?
Feb 8, 2008 at 8:20 PM
I feel pretty good about them. No issues have been reported as of yet. We have been using the cache provider for over four months in our development and haven't had any issues with it. We have only just started using the session provider.
Feb 16, 2008 at 1:25 PM
I am planning to use this in a SaaS e-commerce project I'm working on right now. I'll do some benchmarks soon and we'll see how it performs.
Feb 18, 2008 at 3:32 PM
That would be great.
Mar 3, 2008 at 6:04 PM
We're also planning to use this in a SaaS project. To be honest, some initial load testing has shown that there are limitations we must work with. The obvious one is the 1 MB limit on each object. This means very large lists cannot simply be cached directly; they need some kind of pointer or chunking strategy. The limit works out to roughly 32,000 items in a list for Guids, or 128,000 for seven-digit integers. It would seem that either strategy would degrade performance, but I still have a lot to learn about caching.

Currently I'm testing what the overall limitation on the size of the cache would be. While we are only putting some of a very large database onto this platform for now, the plan is to eventually have a working copy of the db, of sorts, within the cache, using a publish-and-subscribe pattern.

We will have memcached in production for some modules shortly, but we may have to move to a commercial alternative if we cannot remove the limitations we've found. Note that the limitations may be due to our approach and not memcached itself, though my hunch is that the win32 client is where the obstacle is going to be. I don't see any signs of updates, and they explicitly say not to use it in production. Another limitation, somewhat related, is that the win32 client does not support multi-gets.
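The pointer/chunking idea mentioned above can be sketched as follows. This is a minimal illustration, not code from these providers: a plain dict stands in for the memcached client, and the key scheme and chunk size are assumptions chosen to stay under the 1 MB item limit.

```python
# Chunking strategy for caching lists larger than the 1 MB item limit.
# A dict simulates the cache; key names and CHUNK_SIZE are illustrative.

CHUNK_SIZE = 30_000  # items per chunk, kept safely under the item limit

def store_list(cache, key, items):
    """Store `items` as numbered chunks plus one pointer entry."""
    chunks = [items[i:i + CHUNK_SIZE] for i in range(0, len(items), CHUNK_SIZE)]
    for n, chunk in enumerate(chunks):
        cache[f"{key}:chunk:{n}"] = chunk
    # The pointer entry records how many chunks to fetch on the way back.
    cache[key] = {"chunks": len(chunks)}

def load_list(cache, key):
    """Reassemble the full list by following the pointer entry."""
    pointer = cache.get(key)
    if pointer is None:
        return None
    items = []
    for n in range(pointer["chunks"]):
        items.extend(cache[f"{key}:chunk:{n}"])
    return items
```

Each chunk is a separate cache item, so only the pointer entry plus the chunks actually touched need to travel over the wire; the extra round-trips are the performance cost being weighed above.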
Mar 4, 2008 at 4:34 PM
"Limit is 32,000 items in a list for Guids or 128,000 for seven-digit integers"
How many instances of memcached were you running, and with how much memory? I don't think there is a limit on how many objects can be stored; that depends on how much memory memcached has access to. There is a limit of 1 MB per object in the default memcached binary. Please refer to the following post on how to remove this limitation.
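For what it's worth, the list-size figures quoted above are consistent with simple division by the 1 MB item cap. The per-item byte costs here are my own assumptions (a serialized Guid at roughly 32 bytes, a seven-digit integer plus delimiter at roughly 8 bytes), not measurements from the thread:

```python
# Back-of-envelope check of the quoted list-size limits against the
# default 1 MB memcached item cap. Per-item sizes are assumptions.

ITEM_LIMIT = 1 * 1024 * 1024  # default max item size, in bytes

guid_bytes = 32  # assumed cost of one serialized Guid
int_bytes = 8    # assumed cost of a 7-digit integer plus separator

guids_per_item = ITEM_LIMIT // guid_bytes  # ~32,768, matching "32,000"
ints_per_item = ITEM_LIMIT // int_bytes    # ~131,072, matching "128,000"
print(guids_per_item, ints_per_item)
```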

Well, if you are willing to help, we can actually get the latest version of memcached ported to win32 and add dependency support for SQL Server, files, etc. The libevent library used in memcached actually has a win32 version. This will help my project and yours.
Mar 4, 2008 at 6:53 PM
Yes, I can increase the max limit (easy from the command line, but I'm not sure how to set it when it's running as a service; I tried installing it with -m2048 but have only managed to get more memory when running from the command line) and put in as many objects as I have memory for. The limit I referred to was having a list as an object, i.e. adding one object to the cache that effectively stores a list of object pointers. One thing I'm doing is making sure we work with smaller semantic lists, and I find that having lists greater than 30k is not such a great idea anyway.

I've seen that you can reconfigure the size of the slabs, but I've read recommendations against this, and I realise that we need a limit somewhere or our overall size will be restricted. After some experimentation I reckon 1 MB is a fairly decent slab size. Memcached seems very good at allocating memory anyway. I have some metrics on storing large numbers of different-sized objects in the cache if you're interested.
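On the service-memory problem above: from the command line the cap is set with memcached's standard `-m` flag, and when running as a Windows service the flag has to end up in the service's registered command line. A sketch of one way to do that with `sc config` follows; the service name and install path are assumptions about how the win32 port registers itself, so verify both on your machine before running this.

```shell
# Standard memcached: set the memory cap to 2 GB from the command line.
memcached.exe -m 2048

# When installed as a service, put -m into the service's command line.
# Service name and path below are assumptions -- check with: sc query
sc config "memcached" binPath= "\"C:\memcached\memcached.exe\" -d runservice -m 2048"
sc stop "memcached"
sc start "memcached"
```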

I would happily help where I can. I'd have to put my hand up and say it's been over ten years since I've written any C++, and it wasn't great then! (I'm assuming the win32 client is in C.) However, I'm alright with C# and could maybe help there. Have you taken notice of BeITMemcached? Would it be wise to consider having an option to use their client libraries as well? I guess that's up to Enyim and them to see if they can join forces. There does seem to be a fast-growing interest in memcached for Windows, which is great. Again, ask me if there's anything I could help you with.
Mar 4, 2008 at 9:55 PM
"I have some metrics on storing large numbers of different-sized objects in the cache if you're interested." I am interested in it. Please share.
I can create a new project for the win32 memcached port and try to get some help.
Mar 6, 2008 at 10:48 PM
Memcached 1.2.4 for Win32 beta is at Please download. Thanks
Mar 7, 2008 at 7:36 AM
Thanks, I'd actually just seen that yesterday. I'm on that memcached mailing list too and saw your request, but I'm still pending authorisation to post to it. Today I'll be doing a quick speed comparison to see if there's much of a difference between memcached.clientlibrary and Enyim. I don't expect there to be much of a difference, but I wanted to see. I'll come back with those results and the ones mentioned above. I've sent an email to Kenneth to ask him about setting a project up on CodePlex or Google Code.
Mar 7, 2008 at 8:52 AM
Metrics for win32 memcached.
Machine: XP, 3 GB RAM, CPU never stressed, 2 GB for memcached. (Note that the NUnit asserting will add considerably to the times, so these are only to be used relatively and not absolutely; the cache sizes used would be accurate.)
The three tests detailed here were run using both the old memcached provider and the new one. There was scarcely a difference in the times between the two, so I'll just give the results as if for both.
1. MassiveNumberOfObjectsIntoCacheToTestOverallLimitTest: 10,000 objects of 5 KB each went into the cache in 10 seconds and used 70 MB. Projection: we'd need 7.5 GB for 1 million such objects in cache.
2. Similar test with tiny objects: 10,000 objects of 8 chars each took 8 seconds and used 16 MB. We'd need 1.5 GB for 1 million.
3. Ditto with massive objects: each just under the 1 MB limit, each using 1 MB or a little over; 44 seconds to load. We'd need 1 TB for 1 million of these objects.
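The projections in those three tests follow from straight linear scaling of the measured memory, 100x from 10,000 objects up to 1 million. A quick sketch of that arithmetic (note the straight-line figure for the 5 KB test comes out near 6.8 GB, a little under the 7.5 GB quoted, which presumably includes some headroom):

```python
# Scale each test's measured memory use from 10,000 objects to 1,000,000,
# assuming memory grows linearly with object count.

MB = 1024 * 1024
SCALE = 1_000_000 / 10_000  # 100x

tests = {
    "5 KB objects":  70 * MB,      # test 1: 10,000 objects used ~70 MB
    "tiny objects":  16 * MB,      # test 2: 10,000 objects used ~16 MB
    "~1 MB objects": 10_000 * MB,  # test 3: ~1 MB per object
}

for name, used in tests.items():
    projected_gb = used * SCALE / (1024 ** 3)
    print(f"{name}: ~{projected_gb:,.2f} GB for one million objects")
```

The same scaling puts the tiny-object test at about 1.56 GB and the near-1 MB test at roughly a terabyte, matching the figures above.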

Some lessons learnt: don't rely on massive objects; loading a big cache could take some time; and there would seem to be a sweet spot in object size that best balances cache capacity and speed. These tests really exercised writing to the cache, so I put a reading element into all of them (one read for every write) and still saw no difference between the providers. I don't pay much heed to the reading times, as I'd need better tests without so many asserts involved (reading and asserting added 16 seconds to the combined tests above, bringing the total to 78 seconds).