
When should I use /dev/shm/ and when should I use /tmp/?

When should I use /dev/shm/ and when should I use /tmp/? Can I always rely on them both being there on Unices?

Guest [Entry]

"/dev/shm is a temporary file storage filesystem, i.e., tmpfs, that uses RAM for the backing store. 
It can function as a shared memory implementation that facilitates IPC.

From Wikipedia:

Recent 2.6 Linux kernel builds have started to offer /dev/shm as shared memory in the form of a ramdisk, more specifically as a world-writable directory that is stored in memory with a defined limit in /etc/default/tmpfs. /dev/shm support is completely optional within the kernel config file. It is included by default in both Fedora and Ubuntu distributions, where it is most extensively used by the PulseAudio application.


/tmp is the location for temporary files as defined in the Filesystem Hierarchy Standard, which is followed by almost all Unix and Linux distributions.

Since RAM is significantly faster than disk storage, you can use /dev/shm instead of /tmp for a performance boost if your process is I/O-intensive and makes heavy use of temporary files.
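
A crude way to see the difference for yourself is a sequential write test with dd (a rough sketch; the paths and sizes are just examples, and on distributions where /tmp is itself a tmpfs the numbers will come out similar):

    # Write 512 MB to the RAM-backed tmpfs and note the reported throughput
    dd if=/dev/zero of=/dev/shm/ddtest bs=1M count=512 conv=fdatasync
    # Write the same amount to /tmp (disk-backed on many systems);
    # conv=fdatasync forces the data out of the page cache so the disk case is measured honestly
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=512 conv=fdatasync
    # Clean up
    rm /dev/shm/ddtest /tmp/ddtest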

To answer your questions: No, you cannot always rely on /dev/shm being present, certainly not on machines strapped for memory. You should use /tmp unless you have a very good reason for using /dev/shm.

Remember that /tmp can be part of the / filesystem instead of a separate mount, and hence can grow as required. The size of /dev/shm is limited by the available RAM on the system, so you're more likely to run out of space on this filesystem.
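
If you do take the /dev/shm route, it's worth verifying at runtime that it is actually mounted as a tmpfs and has room, rather than assuming it. A minimal sketch (the grep pattern assumes the conventional mount point):

    # Confirm /dev/shm is a mounted tmpfs before relying on it
    grep -q ' /dev/shm tmpfs ' /proc/mounts || echo "warning: no tmpfs at /dev/shm"
    # Show its total size and current usage
    df -h /dev/shm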
Guest [Entry]

"Okay, here's the reality.

Both tmpfs and a normal filesystem are a memory cache over disk.

A tmpfs uses memory and swap space as its backing store; a regular filesystem uses a specific area of disk. In neither case does the amount of RAM limit how large the filesystem can be: it is quite possible to have a 200 GB tmpfs on a machine with less than a GB of RAM, if you have enough swap space.

The difference is in when data is written to disk. For a tmpfs, data is written ONLY when memory gets too full or the data is unlikely to be used soon. Most normal Linux filesystems, on the other hand, are designed to always keep a more or less consistent set of data on disk, so that if the user pulls the plug they don't lose everything.

Personally, I'm used to operating systems that don't crash and to UPS systems (e.g., laptop batteries), so I think the ext2/ext3 filesystems are too paranoid with their 5-10 second checkpoint interval. The ext4 filesystem is better with its 10-minute checkpoint, except that it treats user data as second-class and doesn't protect it. (ext3 is the same, but you don't notice because of the 5-second checkpoint.)

This frequent checkpointing means that unnecessary data is being continually written to disk, even for /tmp.

So the upshot is: create swap space as big as you need your /tmp to be (creating a swap file if necessary), and use that space to mount a tmpfs of the required size onto /tmp.
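
A minimal sketch of that setup, run as root (the sizes and the swap-file path are examples only; on a real system the tmpfs line would normally go in /etc/fstab, and on filesystems where fallocate-created swap files aren't supported you may need dd instead):

    # Create and enable a 4 GB swap file
    fallocate -l 4G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # Mount a tmpfs of the required size on /tmp (mode 1777 = world-writable with sticky bit)
    mount -t tmpfs -o size=4G,mode=1777 tmpfs /tmp

Note that mounting a tmpfs over a populated /tmp hides whatever was already there until it is unmounted.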

NEVER use /dev/shm.

Unless, that is, you're using it for very small (probably mmap'd) IPC files, you are sure that it exists (it's not a standard), and the machine has more than enough memory and swap available.
Guest [Entry]

"/dev/shm is used for shared virtual memory system specific device drivers and programs.

Use it if you are creating a program that requires a heap that should be mapped into shared virtual memory. This goes double if you need multiple processes or threads to be able to safely access that memory.

Just because the system uses a special tmpfs mount for shared memory doesn't mean you should use it as a generic tmpfs partition. Instead, create another tmpfs mount if you want one for your temporary directory.
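
Creating such a dedicated tmpfs mount is straightforward; a minimal sketch, run as root (the mount point and size are examples):

    # Make a dedicated tmpfs instead of borrowing /dev/shm
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
    # Or make it permanent with an /etc/fstab line such as:
    # tmpfs  /mnt/ramdisk  tmpfs  size=512m  0  0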
Guest [Entry]

"In PERL, having 8GB minimum on any machine (all running Linux Mint), I am of what I think is a good habit of doing DB_File-based (data structure in a file) complex algorithms with millions of reads and writes using /dev/shm

In other languages, not having gigabit Ethernet everywhere, and to avoid the stops and starts of network transfer (working locally on a file that actually lives on a server in a client-server setup), I will use a batch file of some type to copy the whole (300-900 MB) file at once to /dev/shm, run the program with its output going to /dev/shm, write the results back to the server, and delete the copies from /dev/shm; the sketch below shows the shape of that cycle.
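
A minimal sketch of that copy/run/write-back cycle (the server path, file names, and program name are all hypothetical):

    # Pull the working file off the server into RAM-backed storage
    cp /mnt/server/data/input.db /dev/shm/input.db
    # Run the job locally, writing output to /dev/shm as well
    myprogram /dev/shm/input.db > /dev/shm/results.out
    # Push the results back to the server, then clean up
    cp /dev/shm/results.out /mnt/server/data/results.out
    rm /dev/shm/input.db /dev/shm/results.out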

Naturally, if I had less RAM, I would not be doing this. Ordinarily, the in-memory filesystem at /dev/shm reports a size of half your available RAM, and ordinary use of RAM is constant, so you really couldn't do this on a machine with 2 GB or less. To turn paraphrase into hyperbole: there are often things in RAM that even the system doesn't report well.