
It's a pretty common issue when Jenkins runs a huge number of jobs for CI, CD, polling, fingerprints and whatnot. Basically, if Jenkins/Hudson is the orchestrating skeleton of your development cycle, you will have come across this issue at some point.
Usually, the command below would show that there is plenty of space left on the server/mount of interest, which is puzzling:
df -h
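For example, on a hypothetical 400 GB Jenkins volume (illustrative numbers, not from a real server), df -h might report something like:

Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme2n1    393G  210G  183G  54% /var/lib/jenkins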
But run the command below to see the actual cause of the problem (in 99% of cases):
df -i
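On the same hypothetical volume, df -i tells the real story (again, illustrative numbers; 26214400 happens to be the default inode count for a 400 GB ext4 volume, as we will see below):

Filesystem       Inodes    IUsed IFree IUse% Mounted on
/dev/nvme2n1   26214400 26214400     0  100% /var/lib/jenkins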
Yes, correct: the filesystem of interest shows 100% consumed. So let me attempt to explain the issue. The filesystem may still have free space, but its inodes are exhausted. An inode is a logical index entry that the filesystem uses to track files stored on the disk. Inode entries store metadata about each file, directory or object, and only point to the data blocks rather than storing the data itself.
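Before reformatting anything, it is worth finding out what is actually eating the inodes. Here is a quick sketch using GNU find (I'm assuming the Jenkins home lives at /var/lib/jenkins; adjust the path to your setup):

sudo find /var/lib/jenkins -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20

This counts files per directory and lists the top 20 inode consumers. With Jenkins, the usual suspects are old builds, workspaces and fingerprint records, which can often be rotated or cleaned up instead of reformatting.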
Let's understand what the highest possible number of inodes for a volume is!
I will take up only the ext4 filesystem here; I will try to explain other popular filesystems in the next stories.
EXT4: is one of the most popular filesystems and the one I prefer for most of its advantages. When the filesystem is made/formatted we can define some values according to the requirements; if not, the system will use default values.
ext4 by default makes the filesystem with a 4096-byte block size and one inode for every 4 blocks (a bytes-per-inode ratio of 16384). Let's say the volume size is 400 GB: that is 104857600 blocks of 4 KB each, and by default you get a quarter of that as inodes, i.e. 26214400 inodes.
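You can verify these numbers on an existing ext4 filesystem with tune2fs (using nvme2n1 here only because it is the disk of interest later in this post):

sudo tune2fs -l /dev/nvme2n1 | grep -E 'Inode count|Block count|Block size'

On a 400 GB volume formatted with the defaults, this should report an Inode count of 26214400 and a Block count of 104857600.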
But the bytes-per-inode ratio can potentially be lowered all the way down to the block size, i.e. one inode per block, which is 4 times the default (each block group's inode bitmap occupies a single block, so ext4 cannot hold more inodes than blocks).
That means a maximum of 104857600 inodes can be achieved on a 400 GB ext4 filesystem.
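The arithmetic is easy to sanity-check in the shell (purely illustrative):

# 400 GB divided by the 4096-byte block size = 104857600 blocks
echo $((400 * 1024 * 1024 * 1024 / 4096))
# one inode per 16384 bytes by default = 26214400 inodes
echo $((400 * 1024 * 1024 * 1024 / 16384))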
So the filesystem should be formatted with the required values when setting things up.
The command below helps, formatting the filesystem (nvme2n1 is the disk of interest below) with the mentioned maximum value.
(Don't try this on a live server; you will end up with a clean disk with nothing left.)
mkfs -t ext4 -N 104857600 /dev/nvme2n1
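Equivalently, you can let mkfs compute the count from a bytes-per-inode ratio instead of passing an absolute number; -i 4096 gives one inode per 4 KB block, the densest layout ext4 supports:

mkfs -t ext4 -i 4096 /dev/nvme2n1

After mounting, run df -i again to confirm the new inode budget. Keep in mind the inode count is fixed at format time and cannot be changed afterwards without recreating the filesystem.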