I'm writing a simple bash script to analyze some logs in real time, and I'm wondering how to handle the fact that every few seconds I have to find the place in the file where I finished reading the previous time. Right now I am doing something like this:
LOG_FILE=path_to_file
DELAY=1   # seconds between refreshes
LINES=100 # lines to read per cycle
LAST=$(tail -n 1 "$LOG_FILE")
IFS=$'\n' # split tail output on newlines only
while true
do
    clear
    found=0
    LOG=$(tail -n "$LINES" "$LOG_FILE")
    for line in $LOG
    do
        # Skip everything up to and including the last line
        # seen in the previous cycle.
        if [ "$line" = "$LAST" ]; then
            found=1
            continue
        fi
        if [ "$found" = 0 ]; then
            continue
        fi
        # Analyzing, counting and stuff.
        echo "$stuff"
    done
    LAST=$line
    sleep "$DELAY"
done
So every cycle I fetch some number of lines from the end of the file and look for the one that was last in the previous run. This works well enough until, in one cycle, more than the defined number of lines is added. I can always set something like LINES=10000, but then there will be thousands of useless iterations just to determine whether I have found the last line from the previous run yet.
I'm wondering if I can do this somewhat more efficiently?
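One idea I've considered is to track the byte offset already read instead of matching lines. A rough, untested sketch (assuming tail supports the -c +N form to start output at a given byte, and using wc -c for the file size):

LOG_FILE=path_to_file
DELAY=1
offset=$(wc -c < "$LOG_FILE") # start at the current end of file
while true
do
    size=$(wc -c < "$LOG_FILE")
    if [ "$size" -gt "$offset" ]; then
        # Read only the bytes appended since the last cycle.
        tail -c +"$((offset + 1))" "$LOG_FILE" | while IFS= read -r line
        do
            echo "$line" # analyzing, counting and stuff would go here
        done
        offset=$size
    fi
    sleep "$DELAY"
done

This reads each byte at most once, at the cost of assuming the file is only ever appended to; log rotation would invalidate the stored offset.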
tail -f
- wouldn't that be a solution, for simply waiting for more lines? – jm666 2 mins ago
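For reference, a minimal sketch of what the comment suggests: tail -f blocks and emits each new line as it is appended, so no bookkeeping of the previous position is needed at all (process_line is a hypothetical stand-in for the analysis step):

LOG_FILE=path_to_file
# -n 0 skips existing content and follows only newly appended lines.
tail -f -n 0 "$LOG_FILE" | while IFS= read -r line
do
    process_line "$line" # analyzing, counting and stuff
done

Note that the while body runs in a subshell because of the pipe, so counters kept there are not visible afterwards; in bash, `while ... done < <(tail -f -n 0 "$LOG_FILE")` avoids that.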