Is anyone seeing some kind of problem when playing back files, but only when the recording has not yet finished? Possibly just when replaying a file that is currently being extended, maybe by another VDR and/or over NFS.
[i had a problem like that many months ago, attempted to fix it, but it seems i was too successful ;) - the problem went away, but i didn't verify that the change actually helped. By now i don't remember exactly what the symptoms were, and what the vdr setup looked like at the time; hence i can't easily reproduce it. I'm planning on basing another feature on top of the fix and wondering how to handle that...]
artur
On Wed, Mar 26, 2008 at 10:27 AM, Artur Skawina art_k@o2.pl wrote:
Is anyone seeing some kind of problem when playing back files, but only when the recording has not yet finished? Possibly just when replaying a file that is currently being extended, maybe by another VDR and/or over NFS.
[i had a problem like that many months ago, attempted to fix it, but it seems i was too successful ;) - the problem went away, but i didn't verify that the change actually helped. By now i don't remember exactly what the symptoms were, and what the vdr setup looked like at the time; hence i can't easily reproduce it. I'm planning on basing another feature on top of the fix and wondering how to handle that...]
It would help if you mentioned what the problem actually was, so people know what they're supposed to be looking for. I've played recordings back before they're finished many times and never noticed any problems. *shrug*
VDR User wrote:
On Wed, Mar 26, 2008 at 10:27 AM, Artur Skawina art_k@o2.pl wrote:
Is anyone seeing some kind of problem when playing back files, but only when the recording has not yet finished? Possibly just when replaying a file that is currently being extended, maybe by another VDR and/or over NFS.
[i had a problem like that many months ago, attempted to fix it, but it seems i was too successful ;) - the problem went away, but i didn't verify that the change actually helped. By now i don't remember exactly what the symptoms were, and what the vdr setup looked like at the time; hence i can't easily reproduce it. I'm planning on basing another feature on top of the fix and wondering how to handle that...]
It would help if you mentioned what the problem actually was, so people know what they're supposed to be looking for. I've played recordings back before they're finished many times and never noticed any problems. *shrug*
kind of hard to describe something you no longer remember... ;) I _think_ the issue was some kind of stuttering playback; it didn't always happen, but often enough that one day i decided to see if writing the stream in much larger chunks would help. It did; i haven't seen it since.
artur
On Wed, Mar 26, 2008 at 12:03 PM, Artur Skawina art_k@o2.pl wrote:
kind of hard to describe something you no longer remember... ;) I _think_ the issue was some kind of stuttering playback; it didn't happen always, but often enough that one day i decided to see if writing the stream in much larger chunks would help. It did; never seen it since.
The only time I've heard about people having stuttering playback of recordings was when DMA wasn't turned on for their harddrive(s). Other than that, I don't know what to tell you. I play back recordings while they're still recording all the time and have never experienced any problems. Maybe someone else can comment on this, but otherwise it seems to be a dead topic.
On Wednesday 26 March 2008 21:51:06 VDR User wrote:
On Wed, Mar 26, 2008 at 12:03 PM, Artur Skawina art_k@o2.pl wrote:
kind of hard to describe something you no longer remember... ;) I _think_ the issue was some kind of stuttering playback; it didn't happen always, but often enough that one day i decided to see if writing the stream in much larger chunks would help. It did; never seen it since.
The only time I've heard about people having stuttering playback of recordings was when DMA wasn't turned on for their harddrive(s). Other than that, I don't know what to tell you. I play back recordings while they're still recording all the time and have never experienced any problems. Maybe someone else can comment on this, but otherwise it seems to be a dead topic.
Artur is right. I am using the following VDR setup: 2 VDR machines in client (FF) / server (budget) mode (thanks to streamdev) over wifi. The video folder is shared over the network via CIFS (aka Samba). Replaying a VDR recording is not always smooth, and timeshifting on the client side does not work at all. Recording on the server while playing back on the client results in alternating smooth playback and freezes, with lip-sync issues. Watching live programs is perfect.
alexw wrote:
I _think_ the issue was some kind of stuttering playback; it didn't happen always, but often enough that one day i decided to see if writing the stream in much larger chunks would help. It did; never seen it since.
I am using the following VDR setup: 2 VDR machines in client(FF)/server(budget) mode (thank to streamdev) over wifi. The video folder is shared over the network in CIFS (aka Samba). Actually replaying VDR
I have a similar setup, just with 100M ethernet instead of wifi and NFS instead of samba. wifi could be a problem, but if you're able to watch the same channel live you should also be able to view a recording.
recording is not smooth all the time and the timeshifting on the client side is not working at all.
Timeshifting on the client requires roughly 3x the bandwidth (the incoming live stream, the recording being written to the share, and the timeshifted data being read back), with data going in both directions, so it may not work all that well over wireless.
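(A rough back-of-the-envelope check of that estimate; the per-stream bitrate and the usable wifi throughput below are assumptions for illustration, not measurements from this setup:)

#include <cstdio>

// Illustrative only: when timeshifting on the client, three streams cross
// the client's wifi link -- the incoming live stream, the recording being
// written to the network share, and the read-back for playback. Both the
// stream bitrate and the usable wifi throughput are assumptions.
int main(void)
{
  const double stream_mbps = 5.0;                // assumed DVB SD bitrate
  const double usable_mbps = 20.0;               // assumed usable part of a 54M link
  const double needed_mbps = 3.0 * stream_mbps;  // half-duplex: all streams share airtime
  std::printf("~%.0f of ~%.0f Mbit/s -> %s\n", needed_mbps, usable_mbps,
              needed_mbps <= usable_mbps ? "tight, little headroom" : "not enough");
  return 0;
}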
Recording on the server and playing on the client at the same time results in alternating smooth playback and freezes, with lip-sync issues. Watching live programs is perfect.
But this sounds very much like the problem i was seeing. In my case it was dependent on the bitrate and number of simultaneous recordings IIRC.
If you'd like to try a patch, attached is what i've been using since May 2007. It could help, but might also not change anything at all; the main reason i'm interested in whether it helps is that i have O_DIRECT support done on top of this, and am not sure if i should keep the write coalescing for the case where O_DIRECT isn't available.
It contains:
- various code cleanups (made after Klaus merged the fadvise-related modifications; they make the patch a bit harder to read, but should be otherwise harmless)
- tweaks to improve playback (smarter readahead)
- tweaks to improve cutting performance (didn't help much, iirc)
- write coalescing -- the stream is written in multi-megabyte chunks instead of a dozen or so kilobytes at a time (sketched below). This is the interesting part; it adds another data copy, but seems to have made the stuttering playback disappear. (This change only makes a difference when applied to the vdr that does the recording, i.e. the server; the fadvise tweaks mentioned above could also improve things when applied to the client.)
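(For readers who don't want to dig through the diff below, a minimal standalone sketch of the write-coalescing idea -- buffer sizes and class/method names here are illustrative, not the actual cUnbufferedFile code, and error handling is omitted:)

#include <cstring>
#include <cstdlib>
#include <unistd.h>

class cCoalescingWriter {
public:
  cCoalescingWriter(int Fd, size_t Chunk = 4 * 1024 * 1024)   // ~4M user-space buffer
    : fd(Fd), chunk(Chunk), used(0), buf((unsigned char *)malloc(Chunk)) {}
  ~cCoalescingWriter() { Flush(); free(buf); }
  ssize_t Write(const void *Data, size_t Size) {
    if (!buf)
      return write(fd, Data, Size);       // allocation failed: write directly
    if (Size > chunk - used)
      Flush();                            // buffer would overflow: drain it first
    if (Size > chunk)
      return write(fd, Data, Size);       // oversized request: bypass the buffer
    memcpy(buf + used, Data, Size);       // typical case: just accumulate
    used += Size;
    return (ssize_t)Size;
  }
  void Flush(void) {
    if (used > 0) {
      write(fd, buf, used);               // data reaches the fs in MB-sized chunks
      used = 0;
    }
  }
private:
  int fd;
  size_t chunk;
  size_t used;
  unsigned char *buf;
};

In the patch itself, the chunk is 8M while cutting and 4M otherwise, and any remaining buffered data is flushed in Close().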
Patch made against vdr-1.4.7, but applies to 1.6.0 too.
artur
diff --git a/cutter.c b/cutter.c index 5170ae4..7e2e506 100644 --- a/cutter.c +++ b/cutter.c @@ -66,7 +66,8 @@ void cCuttingThread::Action(void) toFile = toFileName->Open(); if (!fromFile || !toFile) return; - fromFile->SetReadAhead(MEGABYTE(20)); + fromFile->CuttingSrc(); + toFile->CuttingDst(); int Index = Mark->position; Mark = fromMarks.Next(Mark); int FileSize = 0; @@ -91,7 +92,7 @@ void cCuttingThread::Action(void) if (fromIndex->Get(Index++, &FileNumber, &FileOffset, &PictureType, &Length)) { if (FileNumber != CurrentFileNumber) { fromFile = fromFileName->SetOffset(FileNumber, FileOffset); - fromFile->SetReadAhead(MEGABYTE(20)); + fromFile->CuttingSrc(); CurrentFileNumber = FileNumber; } if (fromFile) { @@ -124,6 +125,7 @@ void cCuttingThread::Action(void) error = "toFile 1"; break; } + toFile->CuttingDst(); FileSize = 0; } LastIFrame = 0; @@ -164,6 +166,7 @@ void cCuttingThread::Action(void) error = "toFile 2"; break; } + toFile->CuttingDst(); FileSize = 0; } } diff --git a/tools.c b/tools.c index 255c806..a14f799 100644 --- a/tools.c +++ b/tools.c @@ -1055,7 +1055,8 @@ bool cSafeFile::Close(void)
#define USE_FADVISE
-#define WRITE_BUFFER KILOBYTE(800) +//#define dfsyslog dsyslog // uncomment to turn on fadvise related logging +#define dfsyslog(a...) do {} while (0)
cUnbufferedFile::cUnbufferedFile(void) { @@ -1073,12 +1074,19 @@ int cUnbufferedFile::Open(const char *FileName, int Flags, mode_t Mode) fd = open(FileName, Flags, Mode); curpos = 0; #ifdef USE_FADVISE - begin = lastpos = ahead = 0; - cachedstart = 0; - cachedend = 0; - readahead = KILOBYTE(128); + lastpos = 0; + lastjump = 0; + cachedstart = cachedend = 0; + // 4M readahead seems to work for playback while cutting, + // but isn't quite enough for FF while cutting... + readahead = MEGABYTE(12); + writebuffer = KILOBYTE(2000); written = 0; totwritten = 0; + cutting = 0; + + wbuf = NULL; + if (fd >= 0) posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM); // we could use POSIX_FADV_SEQUENTIAL, but we do our own readahead, disabling the kernel one. #endif @@ -1087,6 +1095,11 @@ int cUnbufferedFile::Open(const char *FileName, int Flags, mode_t Mode)
int cUnbufferedFile::Close(void) { + if (fd >= 0 && wbuf) { + if (wbuf_len) + WriteBuf(wbuf, wbuf_len); + free(wbuf); + } #ifdef USE_FADVISE if (fd >= 0) { if (totwritten) // if we wrote anything make sure the data has hit the disk before @@ -1107,15 +1120,35 @@ int cUnbufferedFile::Close(void) #define FADVGRAN KILOBYTE(4) // AKA fadvise-chunk-size; PAGE_SIZE or getpagesize(2) would also work. #define READCHUNK MEGABYTE(8)
-void cUnbufferedFile::SetReadAhead(size_t ra) +void cUnbufferedFile::CuttingSrc(void) +{ + readahead = MEGABYTE(20); + cutting = 1; +} + +void cUnbufferedFile::CuttingDst(void) { - readahead = ra; + writebuffer = MEGABYTE(16); // 4M is too little. + cutting = 2; }
int cUnbufferedFile::FadviseDrop(off_t Offset, off_t Len) { - // rounding up the window to make sure that not PAGE_SIZE-aligned data gets freed. - return posix_fadvise(fd, Offset - (FADVGRAN - 1), Len + (FADVGRAN - 1) * 2, POSIX_FADV_DONTNEED); + // Round up the window to make sure that not PAGE_SIZE-aligned data gets freed. + // Note that that also means calling this with Len==0 isn't special (unlike fadvise). + dfsyslog("DROP: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, Offset, Offset+Len, Len); + off_t prewin = min(Offset ,(off_t)FADVGRAN - 1); // must not wrap below 0. + return posix_fadvise(fd, Offset - prewin, Len + prewin + (FADVGRAN - 1), POSIX_FADV_DONTNEED); +} + +// Trigger background readahead on the specified range and add +// it to the "cached" area so that we can drop the data later. +int cUnbufferedFile::FadviseRead(off_t Offset, off_t Len) +{ + dfsyslog("WANT: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, Offset, Offset+Len, Len); + cachedstart = min(cachedstart, Offset); + cachedend = max(cachedend, Offset+Len); + return posix_fadvise(fd, Offset, Len, POSIX_FADV_WILLNEED); }
off_t cUnbufferedFile::Seek(off_t Offset, int Whence) @@ -1131,82 +1164,107 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size) if (fd >= 0) { #ifdef USE_FADVISE off_t jumped = curpos-lastpos; // nonzero means we're not at the last offset - if ((cachedstart < cachedend) && (curpos < cachedstart || curpos > cachedend)) { + + dfsyslog("READ: fd:%3d %9zd .. %9zd SIZE: %6zd jump: %9zd ra: %7zd", fd, curpos, curpos+Size, Size, jumped, readahead); + + if (curpos < cachedstart || curpos > cachedend) { // current position is outside the cached window -- invalidate it. - FadviseDrop(cachedstart, cachedend-cachedstart); + if (cachedstart != cachedend) + FadviseDrop(cachedstart, cachedend-cachedstart); cachedstart = curpos; cachedend = curpos; } - cachedstart = min(cachedstart, curpos); #endif ssize_t bytesRead = safe_read(fd, Data, Size); #ifdef USE_FADVISE if (bytesRead > 0) { + cachedstart = min(cachedstart, curpos); curpos += bytesRead; - cachedend = max(cachedend, curpos); - // Read ahead: - // no jump? (allow small forward jump still inside readahead window). - if (jumped >= 0 && jumped <= (off_t)readahead) { - // Trigger the readahead IO, but only if we've used at least - // 1/2 of the previously requested area. This avoids calling - // fadvise() after every read() call. - if (ahead - curpos < (off_t)(readahead / 2)) { - posix_fadvise(fd, curpos, readahead, POSIX_FADV_WILLNEED); - ahead = curpos + readahead; - cachedend = max(cachedend, ahead); - } - if (readahead < Size * 32) { // automagically tune readahead size. - readahead = Size * 32; + // no jump or small forward jump still inside readahead window. + if (jumped >= 0 && curpos <= cachedend) { + // Trigger the readahead IO, but only if we've used at least some of the previously + // requested area. This avoids calling fadvise() after every read() call. + size_t cachedsize = cachedend - curpos; + size_t ra = cachedsize + Size*2 + (size_t)jumped*1; + if (cutting) + ra += KILOBYTE(64); + ra = min(readahead, ra); + // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk. + // (A) is important when starting w/ a small readahead. + if (cachedsize < (ra-ra/4) || cachedsize+KILOBYTE(256) <= ra) + FadviseRead(curpos, ra); + } + else if (jumped >= 0) { // either large forward jump, or FF (jumps by ~4xSize) + FadviseRead(curpos, ((size_t)jumped < Size*8)?(jumped+Size)*2:Size*2); + } + else /*if (jumped < 0)*/ { // backward jump: + // We don't want any readahead, otherwise e.g. fast-rewind gets in trouble. +#if 1 + // But we'll do some read behind for sequential short backward jumps. + size_t rbsize = -jumped * 4; + if (lastjump <= 0 && (size_t)-jumped < Size * 16 && (size_t)((curpos-Size) - cachedstart ) < rbsize) { + // current position has moved back enough, grow tail window. + off_t start = max((off_t)0, (off_t)((curpos - Size) - rbsize)); + FadviseRead(start, rbsize); } +#endif + // We could reduce readahead window here. But this would lead to already + // prefetched data being thrown out by the code below; not exactly ideal + // if this jump was only caused by a play mode transition etc. + //readahead = Size * 8; } - else - ahead = curpos; // jumped -> we really don't want any readahead, otherwise e.g. fast-rewind gets in trouble. + cachedend = max(cachedend, curpos); }
if (cachedstart < cachedend) { - if (curpos - cachedstart > READCHUNK * 2) { + off_t maxtail = cutting ? KILOBYTE(64) : READCHUNK; + off_t maxhead = max(readahead, (size_t)READCHUNK); + if (jumped >= 0 && curpos - cachedstart >= maxtail * 2) { // current position has moved forward enough, shrink tail window. - FadviseDrop(cachedstart, curpos - READCHUNK - cachedstart); - cachedstart = curpos - READCHUNK; + FadviseDrop(cachedstart, (curpos - maxtail) - cachedstart); + cachedstart = curpos - maxtail; } - else if (cachedend > ahead && cachedend - curpos > READCHUNK * 2) { + else if (jumped < 0 && lastjump < 0 && cachedend - curpos >= maxhead * 2) { // current position has moved back enough, shrink head window. - FadviseDrop(curpos + READCHUNK, cachedend - (curpos + READCHUNK)); - cachedend = curpos + READCHUNK; + // (a large readahead value may prevent this) + FadviseDrop(curpos + maxhead, cachedend - (curpos + maxhead)); + cachedend = curpos + maxhead; } } lastpos = curpos; + lastjump = jumped; #endif return bytesRead; } return -1; }
-ssize_t cUnbufferedFile::Write(const void *Data, size_t Size) +ssize_t cUnbufferedFile::WriteBuf(const void *Data, size_t Size) { if (fd >=0) { ssize_t bytesWritten = safe_write(fd, Data, Size); + //dsyslog("WRIT: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, curpos, curpos+Size, Size); #ifdef USE_FADVISE if (bytesWritten > 0) { - begin = min(begin, curpos); + cachedstart = min(cachedstart, curpos); curpos += bytesWritten; written += bytesWritten; - lastpos = max(lastpos, curpos); - if (written > WRITE_BUFFER) { - if (lastpos > begin) { + cachedend = max(cachedend, curpos); + if (written > writebuffer) { + if (cachedend > cachedstart) { // Now do three things: - // 1) Start writeback of begin..lastpos range + // 1) Start writeback of cachedstart..cachedend range // 2) Drop the already written range (by the previous fadvise call) // 3) Handle nonpagealigned data. - // This is why we double the WRITE_BUFFER; the first time around the + // This is why we double the writebuffer; the first time around the // last (partial) page might be skipped, writeback will start only after // second call; the third call will still include this page and finally // drop it from cache. - off_t headdrop = min(begin, WRITE_BUFFER * 2L); - posix_fadvise(fd, begin - headdrop, lastpos - begin + headdrop, POSIX_FADV_DONTNEED); + off_t headdrop = min(cachedstart, (off_t)writebuffer * 2); + posix_fadvise(fd, cachedstart - headdrop, cachedend - cachedstart + headdrop, POSIX_FADV_DONTNEED); } - begin = lastpos = curpos; + cachedstart = cachedend = curpos; totwritten += written; written = 0; // The above fadvise() works when writing slowly (recording), but could @@ -1216,7 +1274,7 @@ ssize_t cUnbufferedFile::Write(const void *Data, size_t Size) // So we do another round of flushing, just like above, but at larger // intervals -- this should catch any pages that couldn't be released // earlier. - if (totwritten > MEGABYTE(32)) { + if (totwritten > MEGABYTE(32) + writebuffer ) { // It seems in some setups, fadvise() does not trigger any I/O and // a fdatasync() call would be required do all the work (reiserfs with some // kind of write gathering enabled), but the syncs cause (io) load.. @@ -1234,6 +1292,26 @@ ssize_t cUnbufferedFile::Write(const void *Data, size_t Size) return -1; }
+ssize_t cUnbufferedFile::Write(const void *Data, size_t Size) +{ + if (!wbuf) { + wbuf_chunk = cutting?MEGABYTE(8):MEGABYTE(4); + wbuf = MALLOC(uchar,wbuf_chunk); + if (!wbuf) + return WriteBuf(Data, Size); + wbuf_len = 0; + } + if (Size <= wbuf_chunk-wbuf_len) { + memcpy(wbuf+wbuf_len, Data, Size); + wbuf_len += Size; + } else { + WriteBuf(wbuf, wbuf_len); + memcpy(wbuf, Data, Size); + wbuf_len = Size; + } + return Size; +} + cUnbufferedFile *cUnbufferedFile::Create(const char *FileName, int Flags, mode_t Mode) { cUnbufferedFile *File = new cUnbufferedFile; diff --git a/tools.h b/tools.h index 1f71ec4..ce7283c 100644 --- a/tools.h +++ b/tools.h @@ -246,19 +246,26 @@ private: off_t curpos; off_t cachedstart; off_t cachedend; - off_t begin; off_t lastpos; - off_t ahead; + off_t lastjump; size_t readahead; + uchar *wbuf; + int wbuf_len; + int wbuf_chunk; size_t written; size_t totwritten; + int cutting; + size_t writebuffer; int FadviseDrop(off_t Offset, off_t Len); + int FadviseRead(off_t Offset, off_t Len); + ssize_t WriteBuf(const void *Data, size_t Size); public: cUnbufferedFile(void); ~cUnbufferedFile(); int Open(const char *FileName, int Flags, mode_t Mode = DEFFILEMODE); int Close(void); - void SetReadAhead(size_t ra); + void CuttingSrc(void); + void CuttingDst(void); off_t Seek(off_t Offset, int Whence); ssize_t Read(void *Data, size_t Size); ssize_t Write(const void *Data, size_t Size);
On Thu, Mar 27, 2008 at 9:19 AM, Artur Skawina art_k@o2.pl wrote:
I have a similar setup, just with 100M ethernet instead of wifi and NFS instead of samba. wifi could be a problem, but if you're able to watch the same channel live you should also be able to view a recording.
It takes a lot more bandwidth to record & play back than just to record, so the fact that live tv is fine doesn't amount to much, I don't think.
Btw, my setup was similar as well: a Nexus-S card with 100M ethernet (wired).
VDR User wrote:
On Thu, Mar 27, 2008 at 9:19 AM, Artur Skawina art_k@o2.pl wrote:
I have a similar setup, just with 100M ethernet instead of wifi and NFS instead of samba. wifi could be a problem, but if you're able to watch the same channel live you should also be able to view a recording.
It takes a lot more bandwidth to record & play back than just to record, so the fact that live tv is fine doesn't amount to much, I don't think.
I was referring to playing a finished recording and playing a file that is currently being extended by the "server" vdr -- alexw said that doesn't work well for him. It should, unless the disk and/or fs can't handle the two data streams concurrently while keeping the latency low enough. I'm assuming the vdr server is powerful enough to handle the load, yes.
artur
On Thursday 27 March 2008 18:22:35 Artur Skawina wrote:
VDR User wrote:
On Thu, Mar 27, 2008 at 9:19 AM, Artur Skawina art_k@o2.pl wrote:
I have a similar setup, just with 100M ethernet instead of wifi and NFS instead of samba. wifi could be a problem, but if you're able to watch the same channel live you should also be able to view a recording.
Takes a lot more bandwidth to record & playback then just to record so the fact that live tv is fine doesn't amount to much I don't think.
I was referring to playing a finished recording and playing a file that is currently being extended by the "server" vdr -- alexw said that doesn't work well for him. It should, unless the disk and/or fs can't handle the two data streams concurrently, while keeping the latency low enough. I'm assuming the vdr server in powerful enough to handle the load, yes.
artur
Hi,
My setup is a little bit more complicated, as it uses a shared drive for both machines. The two local disks are only CF cards. The file server is composed of 4x SATA II 500 GB drives in RAID 5 (~1.5 TB total available), driven by a Promise controller and a fully idle 3 GHz P4 CPU. Neither the throughput nor the sustained read/write access is a bottleneck. Here is a quick ASCII art drawing:
   /-- 100M --[AP]~ WIFI 54Mb ~[vdr client/FF]-[TV]
   |
[switch]--- 100M ---[CIFS/fileserver]
   |
   -- 100M -- [vdr server/B2C2]-DVB-S+DVB-T
This evening I will test the provided patch.
Rgds,
Alex
On Friday 28 March 2008 14:52:38 alexw wrote:
On Thursday 27 March 2008 18:22:35 Artur Skawina wrote:
VDR User wrote:
On Thu, Mar 27, 2008 at 9:19 AM, Artur Skawina art_k@o2.pl wrote:
I have a similar setup, just with 100M ethernet instead of wifi and NFS instead of samba. wifi could be a problem, but if you're able to watch the same channel live you should also be able to view a recording.
Takes a lot more bandwidth to record & playback then just to record so the fact that live tv is fine doesn't amount to much I don't think.
I was referring to playing a finished recording and playing a file that is currently being extended by the "server" vdr -- alexw said that doesn't work well for him. It should, unless the disk and/or fs can't handle the two data streams concurrently, while keeping the latency low enough. I'm assuming the vdr server in powerful enough to handle the load, yes.
artur
Hi,
My setup is a little bit more complicated as it is using a share drive on both machine. The two local disks are only CF. The file server is compose of 4x SATA II 500GB in raid 5 (total ~1.5 TB available) piloted by a promise controller and a fully idle 3 GHz P4 CPU. The throughput or the sustained write/read access is not a bottleneck. Here is a quick ASCII art drawing:
/-- 100M --[AP]~ WIFI 54Mb ~[vdr client/FF]-[TV]
[switch]--- 100M ---[CIFS/fileserver]
-- 100M -- [vdr server/B2C2]-DVB-S+DVB-T
This evening I will test the provided patch.
Rgds,
Alex
Using the patch provided by Artur, playing back a finished or ongoing recording is smoother than before. Sometimes, though, the player stops in the middle of a recording due to a zero-size read request. Here is the log:
vdr: [3836] dvbplayer thread started (pid=3643, tid=3836)
vdr: [3836] resuming replay at index 1950 (0:01:18.01)
vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837)
vdr: [3836] SetBrokenLink: no GOP header found in video packet
vdr: [3836] setting audio track to 1 (0)
vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
<<<unexpected stop of replay>>>
vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837)
vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)
-----------------------------------------------------
vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418
vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump: 0 ra: 12582912
vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627
vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE: 13354 jump: 0 ra: 12582912
vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE: 10286 jump: 0 ra: 12582912
vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE: 31495 jump: 0 ra: 12582912
vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump: 0 ra: 12582912
vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576
vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE: 0 jump: 0 ra: 12582912
vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618)
vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
-----------------------------------------------------
vdr: [5627] READ: fd: 25 1188203807 .. 1188214024 SIZE: 10217 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188214024 .. 1188226152 SIZE: 12128 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188226152 .. 1188267411 SIZE: 41259 jump: 0 ra: 12582912
vdr: [5627] WANT: fd: 25 1188267411 .. 1188503927 SIZE: 236516
vdr: [5627] READ: fd: 25 1188267411 .. 1188278781 SIZE: 11370 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188278781 .. 1188292400 SIZE: 13619 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188292400 .. 1188335024 SIZE: 42624 jump: 0 ra: 12582912
vdr: [5627] WANT: fd: 25 1188335024 .. 1188589175 SIZE: 254151
vdr: [5627] READ: fd: 25 1188335024 .. 1188344461 SIZE: 9437 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188344461 .. 1188365793 SIZE: 21332 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188365793 .. 1188459990 SIZE: 94197 jump: 0 ra: 12582912
vdr: [5627] WANT: fd: 25 1188459990 .. 1188777569 SIZE: 317579
vdr: [5627] READ: fd: 25 1188459990 .. 1188473626 SIZE: 13636 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188473626 .. 1188489407 SIZE: 15781 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188489407 .. 1188531493 SIZE: 42086 jump: 0 ra: 12582912
vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
vdr: [5627] READ: fd: 25 1188531493 .. 1188543845 SIZE: 12352 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188543845 .. 1188554106 SIZE: 10261 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188554106 .. 1188594505 SIZE: 40399 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188594505 .. 1188605731 SIZE: 11226 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188605731 .. 1188616808 SIZE: 11077 jump: 0 ra: 12582912
vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump: 0 ra: 12582912
vdr: [5627] WANT: fd: 25 1189141096 .. 1190189672 SIZE: 1048576
vdr: [5627] READ: fd: 25 1189141096 .. 1189141096 SIZE: 0 jump: 0 ra: 12582912
vdr: [5627] non blocking file reader thread ended (pid=5563, tid=5627)
As you can see, the requested size keeps increasing until it reaches the maximum buffer size. This is also a period with freezes in the video (late delivery).
Cheers,
Alex
alexw wrote:
On Friday 28 March 2008 14:52:38 alexw wrote:
On Thursday 27 March 2008 18:22:35 Artur Skawina wrote:
VDR User wrote:
On Thu, Mar 27, 2008 at 9:19 AM, Artur Skawina art_k@o2.pl wrote:
I have a similar setup, just with 100M ethernet instead of wifi and NFS instead of samba. wifi could be a problem, but if you're able to watch the same channel live you should also be able to view a recording.
Takes a lot more bandwidth to record & playback then just to record so the fact that live tv is fine doesn't amount to much I don't think.
I was referring to playing a finished recording and playing a file that is currently being extended by the "server" vdr -- alexw said that doesn't work well for him. It should, unless the disk and/or fs can't handle the two data streams concurrently, while keeping the latency low enough. I'm assuming the vdr server in powerful enough to handle the load, yes.
My setup is a little bit more complicated as it is using a share drive on both machine. The two local disks are only CF. The file server is compose of 4x SATA II 500GB in raid 5 (total ~1.5 TB available) piloted by a promise controller and a fully idle 3 GHz P4 CPU. The throughput or the sustained write/read access is not a bottleneck. Here is a quick ASCII art drawing:
/-- 100M --[AP]~ WIFI 54Mb ~[vdr client/FF]-[TV]
[switch]--- 100M ---[CIFS/fileserver]
-- 100M -- [vdr server/B2C2]-DVB-S+DVB-T
This evening I will test the provided patch.
Using the patch provided by Artur, playing a finished/ongoing recording is smoother than before.
Thank you for testing (and for enabling the extra logging).
Does "smoother than before" mean that there is some improvement, but the playback still isn't perfect?
The fact that the recorder hangs on to the last 4M of the stream and the data is appended in large chunks can cause problems when timeshifting by just a few seconds -- the player could try to access the not yet written data. This should only happen at the very end of an ongoing recording, within 4M of EOF (depending on the bitrate usually a couple of seconds after the live program arrived).
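(For a feel of the numbers: the 4M figure is the patch's write-coalescing buffer for recordings; the bitrate below is an assumed typical value, not a measurement from this setup:)

#include <cstdio>

// How much playback time the recorder's not-yet-written tail represents.
// 4 MB is the patch's (non-cutting) write buffer; 5 Mbit/s is an assumed
// SD broadcast bitrate.
int main(void)
{
  const double tail_bytes  = 4.0 * 1024 * 1024;
  const double bitrate_bps = 5e6;
  std::printf("~%.1f seconds of video still only in the write buffer\n",
              tail_bytes * 8.0 / bitrate_bps);   // ~6.7 s at these assumptions
  return 0;
}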
Sometimes the player stops in the middle of a recording due to a zero size request. Here is the log:
vdr: [3836] dvbplayer thread started (pid=3643, tid=3836) vdr: [3836] resuming replay at index 1950 (0:01:18.01) vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837) vdr: [3836] SetBrokenLink: no GOP header found in video packet vdr: [3836] setting audio track to 1 (0) vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
<<<unexpect stop of replay>>>
vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837) vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)
vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418 vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump: 0 ra: 12582912 vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627 vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE: 13354 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE: 10286 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE: 31495 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump: 0 ra: 12582912 vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576 vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE: 0 jump: 0 ra: 12582912 vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618) vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
Weird, cUnbufferedFile::Read(Size=0). I'll try to reproduce this.
vdr: [5627] READ: fd: 25 1188203807 .. 1188214024 SIZE: 10217 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188214024 .. 1188226152 SIZE: 12128 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188226152 .. 1188267411 SIZE: 41259 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1188267411 .. 1188503927 SIZE: 236516 vdr: [5627] READ: fd: 25 1188267411 .. 1188278781 SIZE: 11370 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188278781 .. 1188292400 SIZE: 13619 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188292400 .. 1188335024 SIZE: 42624 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1188335024 .. 1188589175 SIZE: 254151 vdr: [5627] READ: fd: 25 1188335024 .. 1188344461 SIZE: 9437 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188344461 .. 1188365793 SIZE: 21332 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188365793 .. 1188459990 SIZE: 94197 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1188459990 .. 1188777569 SIZE: 317579 vdr: [5627] READ: fd: 25 1188459990 .. 1188473626 SIZE: 13636 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188473626 .. 1188489407 SIZE: 15781 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188489407 .. 1188531493 SIZE: 42086 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248 vdr: [5627] READ: fd: 25 1188531493 .. 1188543845 SIZE: 12352 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188543845 .. 1188554106 SIZE: 10261 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188554106 .. 1188594505 SIZE: 40399 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188594505 .. 1188605731 SIZE: 11226 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188605731 .. 1188616808 SIZE: 11077 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1189141096 .. 1190189672 SIZE: 1048576 vdr: [5627] READ: fd: 25 1189141096 .. 1189141096 SIZE: 0 jump: 0 ra: 12582912 vdr: [5627] non blocking file reader thread ended (pid=5563, tid=5627)
As you can see the requested size is increasing until it reaches the max buf. This is also a period with freezes in the video (late delivery).
Do these problems (0-sized reads) occur only near the end of a program being recorded?
Also, I see from the above that the readahead code needs to be more aggressive:
vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
[... small reads...]
vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump: 0 ra: 12582912
the readahead window does not cover the area which is read later -- this is certainly likely to stall playback. I'll fix this (i did not expect such a large difference in read request sizes).
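(For context, a stripped-down sketch of the readahead bookkeeping being discussed; the identifiers are simplified and the real logic lives in cUnbufferedFile::Read()/FadviseRead() in the patch above:)

#define _XOPEN_SOURCE 600
#include <fcntl.h>

static off_t cachedstart = 0, cachedend = 0;   // window we asked the kernel to cache

static void FadviseReadSketch(int fd, off_t Offset, off_t Len)
{
  posix_fadvise(fd, Offset, Len, POSIX_FADV_WILLNEED);   // background readahead
  if (Offset < cachedstart) cachedstart = Offset;
  if (Offset + Len > cachedend) cachedend = Offset + Len;
}

// After a read that ends at 'curpos': if the remaining prefetched area is
// smaller than what the next (possibly much larger) request may need,
// extend the window -- the failure mode in the log above is exactly a
// window that stops short of the next read.
static void EnsureReadahead(int fd, off_t curpos, off_t wantahead)
{
  if (cachedend - curpos < wantahead)
    FadviseReadSketch(fd, curpos, wantahead);
}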
Thanks,
artur
Artur Skawina wrote:
alexw wrote:
On Friday 28 March 2008 14:52:38 alexw wrote:
On Thursday 27 March 2008 18:22:35 Artur Skawina wrote:
VDR User wrote:
On Thu, Mar 27, 2008 at 9:19 AM, Artur Skawina art_k@o2.pl wrote:
I have a similar setup, just with 100M ethernet instead of wifi and NFS instead of samba. wifi could be a problem, but if you're able to watch the same channel live you should also be able to view a recording.
Takes a lot more bandwidth to record & playback then just to record so the fact that live tv is fine doesn't amount to much I don't think.
I was referring to playing a finished recording and playing a file that is currently being extended by the "server" vdr -- alexw said that doesn't work well for him. It should, unless the disk and/or fs can't handle the two data streams concurrently, while keeping the latency low enough. I'm assuming the vdr server in powerful enough to handle the load, yes.
My setup is a little bit more complicated as it is using a share drive on both machine. The two local disks are only CF. The file server is compose of 4x SATA II 500GB in raid 5 (total ~1.5 TB available) piloted by a promise controller and a fully idle 3 GHz P4 CPU. The throughput or the sustained write/read access is not a bottleneck. Here is a quick ASCII art drawing:
/-- 100M --[AP]~ WIFI 54Mb ~[vdr client/FF]-[TV]
[switch]--- 100M ---[CIFS/fileserver]
-- 100M -- [vdr server/B2C2]-DVB-S+DVB-T
This evening I will test the provided patch.
Using the patch provided by Artur, playing a finished/ongoing recording is smoother than before.
Thank you for testing (and for enabling the extra logging).
Does "smoother than before" mean that there is some improvement, but the playback still isn't perfect?
The fact that the recorder hangs on to the last 4M of the stream and the data is appended in large chunks can cause problems when timeshifting by just a few seconds -- the player could try to access the not yet written data. This should only happen at the very end of an ongoing recording, within 4M of EOF (depending on the bitrate usually a couple of seconds after the live program arrived).
The playback is really improved, especially when replaying an ongoing (timeshift) recording. I will try to identify the reason for the sporadic freezes I am still seeing.
Sometimes the player stops in the middle of a recording due to a zero size request. Here is the log:
vdr: [3836] dvbplayer thread started (pid=3643, tid=3836) vdr: [3836] resuming replay at index 1950 (0:01:18.01) vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837) vdr: [3836] SetBrokenLink: no GOP header found in video packet vdr: [3836] setting audio track to 1 (0) vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
<<<unexpect stop of replay>>>
vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837) vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)
vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418 vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump: 0 ra: 12582912 vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627 vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE: 13354 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE: 10286 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE: 31495 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump: 0 ra: 12582912 vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576 vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE: 0 jump: 0 ra: 12582912 vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618) vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
Weird, cUnbufferedFile::Read(Size=0). I'll try to reproduce this.
Sometimes it takes a long time to occur, sometimes not.
vdr: [5627] READ: fd: 25 1188203807 .. 1188214024 SIZE: 10217 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188214024 .. 1188226152 SIZE: 12128 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188226152 .. 1188267411 SIZE: 41259 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1188267411 .. 1188503927 SIZE: 236516 vdr: [5627] READ: fd: 25 1188267411 .. 1188278781 SIZE: 11370 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188278781 .. 1188292400 SIZE: 13619 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188292400 .. 1188335024 SIZE: 42624 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1188335024 .. 1188589175 SIZE: 254151 vdr: [5627] READ: fd: 25 1188335024 .. 1188344461 SIZE: 9437 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188344461 .. 1188365793 SIZE: 21332 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188365793 .. 1188459990 SIZE: 94197 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1188459990 .. 1188777569 SIZE: 317579 vdr: [5627] READ: fd: 25 1188459990 .. 1188473626 SIZE: 13636 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188473626 .. 1188489407 SIZE: 15781 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188489407 .. 1188531493 SIZE: 42086 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248 vdr: [5627] READ: fd: 25 1188531493 .. 1188543845 SIZE: 12352 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188543845 .. 1188554106 SIZE: 10261 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188554106 .. 1188594505 SIZE: 40399 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188594505 .. 1188605731 SIZE: 11226 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188605731 .. 1188616808 SIZE: 11077 jump: 0 ra: 12582912 vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump: 0 ra: 12582912 vdr: [5627] WANT: fd: 25 1189141096 .. 1190189672 SIZE: 1048576 vdr: [5627] READ: fd: 25 1189141096 .. 1189141096 SIZE: 0 jump: 0 ra: 12582912 vdr: [5627] non blocking file reader thread ended (pid=5563, tid=5627)
As you can see the requested size is increasing until it reaches the max buf. This is also a period with freezes in the video (late delivery).
Do these problems (0-sized reads) occur only near the end of a program being recorded?
No, you can experience a stop in the middle of a recording.
Also, I see from the above that the readahead code needs to be more aggressive:
vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
[... small reads...]
vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump: 0 ra: 12582912
the readahead window does not cover the area which is being read later -- this certainly is likely to stall playback. I'll fix this (i did not expect such a large difference in read request sizes.)
I am eager to test the next release of your patch. Thank you for providing it.
Cheers,
Alex
Thanks,
artur
Alexw wrote:
Sometimes the player stops in the middle of a recording due to a zero size request. Here is the log:
vdr: [3836] dvbplayer thread started (pid=3643, tid=3836) vdr: [3836] resuming replay at index 1950 (0:01:18.01) vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837) vdr: [3836] SetBrokenLink: no GOP header found in video packet vdr: [3836] setting audio track to 1 (0) vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
<<<unexpect stop of replay>>>
vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837) vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)
vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418 vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump: 0 ra: 12582912 vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627 vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE: 13354 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE: 10286 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE: 31495 jump: 0 ra: 12582912 vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump: 0 ra: 12582912 vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576 vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE: 0 jump: 0 ra: 12582912 vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618) vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
Weird, cUnbufferedFile::Read(Size=0). I'll try to reproduce this.
Sometimes it take a long time to occur, sometimes not.
Did this start after applying my patch, or did it happen in the past too? Does it always happen at a certain position? A specific stream or bitrate? I don't recall ever having a similar problem; the number 524288 (exactly 512K) looks a bit suspicious...
As you can see the requested size is increasing until it reaches the max buf. This is also a period with freezes in the video (late delivery).
Do these problems (0-sized reads) occur only near the end of a program being recorded?
No, you can experience a stop in the middle of a recording.
Also, I see from the above that the readahead code needs to be more aggressive:
vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
[... small reads...]
vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump: 0 ra: 12582912
the readahead window does not cover the area which is being read later -- this certainly is likely to stall playback. I'll fix this (i did not expect such a large difference in read request sizes.)
The attached patch makes the readahead window grow much faster; this will cause more I/O at the start of playback, but should handle cases like the one above better. If it works correctly, all the ranges mentioned in the "READ:" lines should fall inside the preceding "WANT:" range and playback shouldn't stall. Here the readahead window grows to ~5 Mbytes just after starting playback; i still need to check that this isn't too fast, doesn't saturate the disk and/or link, and doesn't cause delays when jumping etc. Tested by playing a few files from an NFS mount; i didn't notice any problems so far.
An incremental patch would look like this (the attached one (vs 1.4.7) already includes it):
diff --git a/tools.c b/tools.c
index a14f799..e22614f 100644
--- a/tools.c
+++ b/tools.c
@@ -1186,13 +1186,13 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
              // Trigger the readahead IO, but only if we've used at least some of the previously
              // requested area. This avoids calling fadvise() after every read() call.
              size_t cachedsize = cachedend - curpos;
-             size_t ra = cachedsize + Size*2 + (size_t)jumped*1;
+             size_t ra = cachedsize + Size*8 + (size_t)jumped*1;
              if (cutting)
                 ra += KILOBYTE(64);
              ra = min(readahead, ra);
              // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk.
              // (A) is important when starting w/ a small readahead.
-             if (cachedsize < (ra-ra/4) || cachedsize+KILOBYTE(256) <= ra)
+             if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra)
                 FadviseRead(curpos, ra);
           }
           else if (jumped >= 0) {   // either large forward jump, or FF (jumps by ~4xSize)
artur
diff --git a/cutter.c b/cutter.c index 5170ae4..7e2e506 100644 --- a/cutter.c +++ b/cutter.c @@ -66,7 +66,8 @@ void cCuttingThread::Action(void) toFile = toFileName->Open(); if (!fromFile || !toFile) return; - fromFile->SetReadAhead(MEGABYTE(20)); + fromFile->CuttingSrc(); + toFile->CuttingDst(); int Index = Mark->position; Mark = fromMarks.Next(Mark); int FileSize = 0; @@ -91,7 +92,7 @@ void cCuttingThread::Action(void) if (fromIndex->Get(Index++, &FileNumber, &FileOffset, &PictureType, &Length)) { if (FileNumber != CurrentFileNumber) { fromFile = fromFileName->SetOffset(FileNumber, FileOffset); - fromFile->SetReadAhead(MEGABYTE(20)); + fromFile->CuttingSrc(); CurrentFileNumber = FileNumber; } if (fromFile) { @@ -124,6 +125,7 @@ void cCuttingThread::Action(void) error = "toFile 1"; break; } + toFile->CuttingDst(); FileSize = 0; } LastIFrame = 0; @@ -164,6 +166,7 @@ void cCuttingThread::Action(void) error = "toFile 2"; break; } + toFile->CuttingDst(); FileSize = 0; } } diff --git a/tools.c b/tools.c index 255c806..e22614f 100644 --- a/tools.c +++ b/tools.c @@ -1055,7 +1055,8 @@ bool cSafeFile::Close(void)
#define USE_FADVISE
-#define WRITE_BUFFER KILOBYTE(800) +//#define dfsyslog dsyslog // uncomment to turn on fadvise related logging +#define dfsyslog(a...) do {} while (0)
cUnbufferedFile::cUnbufferedFile(void) { @@ -1073,12 +1074,19 @@ int cUnbufferedFile::Open(const char *FileName, int Flags, mode_t Mode) fd = open(FileName, Flags, Mode); curpos = 0; #ifdef USE_FADVISE - begin = lastpos = ahead = 0; - cachedstart = 0; - cachedend = 0; - readahead = KILOBYTE(128); + lastpos = 0; + lastjump = 0; + cachedstart = cachedend = 0; + // 4M readahead seems to work for playback while cutting, + // but isn't quite enough for FF while cutting... + readahead = MEGABYTE(12); + writebuffer = KILOBYTE(2000); written = 0; totwritten = 0; + cutting = 0; + + wbuf = NULL; + if (fd >= 0) posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM); // we could use POSIX_FADV_SEQUENTIAL, but we do our own readahead, disabling the kernel one. #endif @@ -1087,6 +1095,11 @@ int cUnbufferedFile::Open(const char *FileName, int Flags, mode_t Mode)
int cUnbufferedFile::Close(void) { + if (fd >= 0 && wbuf) { + if (wbuf_len) + WriteBuf(wbuf, wbuf_len); + free(wbuf); + } #ifdef USE_FADVISE if (fd >= 0) { if (totwritten) // if we wrote anything make sure the data has hit the disk before @@ -1107,15 +1120,35 @@ int cUnbufferedFile::Close(void) #define FADVGRAN KILOBYTE(4) // AKA fadvise-chunk-size; PAGE_SIZE or getpagesize(2) would also work. #define READCHUNK MEGABYTE(8)
-void cUnbufferedFile::SetReadAhead(size_t ra) +void cUnbufferedFile::CuttingSrc(void) +{ + readahead = MEGABYTE(20); + cutting = 1; +} + +void cUnbufferedFile::CuttingDst(void) { - readahead = ra; + writebuffer = MEGABYTE(16); // 4M is too little. + cutting = 2; }
int cUnbufferedFile::FadviseDrop(off_t Offset, off_t Len) { - // rounding up the window to make sure that not PAGE_SIZE-aligned data gets freed. - return posix_fadvise(fd, Offset - (FADVGRAN - 1), Len + (FADVGRAN - 1) * 2, POSIX_FADV_DONTNEED); + // Round up the window to make sure that not PAGE_SIZE-aligned data gets freed. + // Note that that also means calling this with Len==0 isn't special (unlike fadvise). + dfsyslog("DROP: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, Offset, Offset+Len, Len); + off_t prewin = min(Offset ,(off_t)FADVGRAN - 1); // must not wrap below 0. + return posix_fadvise(fd, Offset - prewin, Len + prewin + (FADVGRAN - 1), POSIX_FADV_DONTNEED); +} + +// Trigger background readahead on the specified range and add +// it to the "cached" area so that we can drop the data later. +int cUnbufferedFile::FadviseRead(off_t Offset, off_t Len) +{ + dfsyslog("WANT: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, Offset, Offset+Len, Len); + cachedstart = min(cachedstart, Offset); + cachedend = max(cachedend, Offset+Len); + return posix_fadvise(fd, Offset, Len, POSIX_FADV_WILLNEED); }
off_t cUnbufferedFile::Seek(off_t Offset, int Whence) @@ -1131,82 +1164,107 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size) if (fd >= 0) { #ifdef USE_FADVISE off_t jumped = curpos-lastpos; // nonzero means we're not at the last offset - if ((cachedstart < cachedend) && (curpos < cachedstart || curpos > cachedend)) { + + dfsyslog("READ: fd:%3d %9zd .. %9zd SIZE: %6zd jump: %9zd ra: %7zd", fd, curpos, curpos+Size, Size, jumped, readahead); + + if (curpos < cachedstart || curpos > cachedend) { // current position is outside the cached window -- invalidate it. - FadviseDrop(cachedstart, cachedend-cachedstart); + if (cachedstart != cachedend) + FadviseDrop(cachedstart, cachedend-cachedstart); cachedstart = curpos; cachedend = curpos; } - cachedstart = min(cachedstart, curpos); #endif ssize_t bytesRead = safe_read(fd, Data, Size); #ifdef USE_FADVISE if (bytesRead > 0) { + cachedstart = min(cachedstart, curpos); curpos += bytesRead; - cachedend = max(cachedend, curpos); - // Read ahead: - // no jump? (allow small forward jump still inside readahead window). - if (jumped >= 0 && jumped <= (off_t)readahead) { - // Trigger the readahead IO, but only if we've used at least - // 1/2 of the previously requested area. This avoids calling - // fadvise() after every read() call. - if (ahead - curpos < (off_t)(readahead / 2)) { - posix_fadvise(fd, curpos, readahead, POSIX_FADV_WILLNEED); - ahead = curpos + readahead; - cachedend = max(cachedend, ahead); - } - if (readahead < Size * 32) { // automagically tune readahead size. - readahead = Size * 32; + // no jump or small forward jump still inside readahead window. + if (jumped >= 0 && curpos <= cachedend) { + // Trigger the readahead IO, but only if we've used at least some of the previously + // requested area. This avoids calling fadvise() after every read() call. + size_t cachedsize = cachedend - curpos; + size_t ra = cachedsize + Size*8 + (size_t)jumped*1; + if (cutting) + ra += KILOBYTE(64); + ra = min(readahead, ra); + // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk. + // (A) is important when starting w/ a small readahead. + if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra) + FadviseRead(curpos, ra); + } + else if (jumped >= 0) { // either large forward jump, or FF (jumps by ~4xSize) + FadviseRead(curpos, ((size_t)jumped < Size*8)?(jumped+Size)*2:Size*2); + } + else /*if (jumped < 0)*/ { // backward jump: + // We don't want any readahead, otherwise e.g. fast-rewind gets in trouble. +#if 1 + // But we'll do some read behind for sequential short backward jumps. + size_t rbsize = -jumped * 4; + if (lastjump <= 0 && (size_t)-jumped < Size * 16 && (size_t)((curpos-Size) - cachedstart ) < rbsize) { + // current position has moved back enough, grow tail window. + off_t start = max((off_t)0, (off_t)((curpos - Size) - rbsize)); + FadviseRead(start, rbsize); } +#endif + // We could reduce readahead window here. But this would lead to already + // prefetched data being thrown out by the code below; not exactly ideal + // if this jump was only caused by a play mode transition etc. + //readahead = Size * 8; } - else - ahead = curpos; // jumped -> we really don't want any readahead, otherwise e.g. fast-rewind gets in trouble. + cachedend = max(cachedend, curpos); }
if (cachedstart < cachedend) { - if (curpos - cachedstart > READCHUNK * 2) { + off_t maxtail = cutting ? KILOBYTE(64) : READCHUNK; + off_t maxhead = max(readahead, (size_t)READCHUNK); + if (jumped >= 0 && curpos - cachedstart >= maxtail * 2) { // current position has moved forward enough, shrink tail window. - FadviseDrop(cachedstart, curpos - READCHUNK - cachedstart); - cachedstart = curpos - READCHUNK; + FadviseDrop(cachedstart, (curpos - maxtail) - cachedstart); + cachedstart = curpos - maxtail; } - else if (cachedend > ahead && cachedend - curpos > READCHUNK * 2) { + else if (jumped < 0 && lastjump < 0 && cachedend - curpos >= maxhead * 2) { // current position has moved back enough, shrink head window. - FadviseDrop(curpos + READCHUNK, cachedend - (curpos + READCHUNK)); - cachedend = curpos + READCHUNK; + // (a large readahead value may prevent this) + FadviseDrop(curpos + maxhead, cachedend - (curpos + maxhead)); + cachedend = curpos + maxhead; } } lastpos = curpos; + lastjump = jumped; #endif return bytesRead; } return -1; }
-ssize_t cUnbufferedFile::Write(const void *Data, size_t Size) +ssize_t cUnbufferedFile::WriteBuf(const void *Data, size_t Size) { if (fd >=0) { ssize_t bytesWritten = safe_write(fd, Data, Size); + //dsyslog("WRIT: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, curpos, curpos+Size, Size); #ifdef USE_FADVISE if (bytesWritten > 0) { - begin = min(begin, curpos); + cachedstart = min(cachedstart, curpos); curpos += bytesWritten; written += bytesWritten; - lastpos = max(lastpos, curpos); - if (written > WRITE_BUFFER) { - if (lastpos > begin) { + cachedend = max(cachedend, curpos); + if (written > writebuffer) { + if (cachedend > cachedstart) { // Now do three things: - // 1) Start writeback of begin..lastpos range + // 1) Start writeback of cachedstart..cachedend range // 2) Drop the already written range (by the previous fadvise call) // 3) Handle nonpagealigned data. - // This is why we double the WRITE_BUFFER; the first time around the + // This is why we double the writebuffer; the first time around the // last (partial) page might be skipped, writeback will start only after // second call; the third call will still include this page and finally // drop it from cache. - off_t headdrop = min(begin, WRITE_BUFFER * 2L); - posix_fadvise(fd, begin - headdrop, lastpos - begin + headdrop, POSIX_FADV_DONTNEED); + off_t headdrop = min(cachedstart, (off_t)writebuffer * 2); + posix_fadvise(fd, cachedstart - headdrop, cachedend - cachedstart + headdrop, POSIX_FADV_DONTNEED); } - begin = lastpos = curpos; + cachedstart = cachedend = curpos; totwritten += written; written = 0; // The above fadvise() works when writing slowly (recording), but could @@ -1216,7 +1274,7 @@ ssize_t cUnbufferedFile::Write(const void *Data, size_t Size) // So we do another round of flushing, just like above, but at larger // intervals -- this should catch any pages that couldn't be released // earlier. - if (totwritten > MEGABYTE(32)) { + if (totwritten > MEGABYTE(32) + writebuffer ) { // It seems in some setups, fadvise() does not trigger any I/O and // a fdatasync() call would be required do all the work (reiserfs with some // kind of write gathering enabled), but the syncs cause (io) load.. @@ -1234,6 +1292,26 @@ ssize_t cUnbufferedFile::Write(const void *Data, size_t Size) return -1; }
+ssize_t cUnbufferedFile::Write(const void *Data, size_t Size) +{ + if (!wbuf) { + wbuf_chunk = cutting?MEGABYTE(8):MEGABYTE(4); + wbuf = MALLOC(uchar,wbuf_chunk); + if (!wbuf) + return WriteBuf(Data, Size); + wbuf_len = 0; + } + if (Size <= wbuf_chunk-wbuf_len) { + memcpy(wbuf+wbuf_len, Data, Size); + wbuf_len += Size; + } else { + WriteBuf(wbuf, wbuf_len); + memcpy(wbuf, Data, Size); + wbuf_len = Size; + } + return Size; +} + cUnbufferedFile *cUnbufferedFile::Create(const char *FileName, int Flags, mode_t Mode) { cUnbufferedFile *File = new cUnbufferedFile; diff --git a/tools.h b/tools.h index 1f71ec4..ce7283c 100644 --- a/tools.h +++ b/tools.h @@ -246,19 +246,26 @@ private: off_t curpos; off_t cachedstart; off_t cachedend; - off_t begin; off_t lastpos; - off_t ahead; + off_t lastjump; size_t readahead; + uchar *wbuf; + int wbuf_len; + int wbuf_chunk; size_t written; size_t totwritten; + int cutting; + size_t writebuffer; int FadviseDrop(off_t Offset, off_t Len); + int FadviseRead(off_t Offset, off_t Len); + ssize_t WriteBuf(const void *Data, size_t Size); public: cUnbufferedFile(void); ~cUnbufferedFile(); int Open(const char *FileName, int Flags, mode_t Mode = DEFFILEMODE); int Close(void); - void SetReadAhead(size_t ra); + void CuttingSrc(void); + void CuttingDst(void); off_t Seek(off_t Offset, int Whence); ssize_t Read(void *Data, size_t Size); ssize_t Write(const void *Data, size_t Size);
On Tuesday 01 April 2008 01:05:37 Artur Skawina wrote:
Alexw wrote:
Sometimes the player stops in the middle of a recording due to a zero size request. Here is the log:
vdr: [3836] dvbplayer thread started (pid=3643, tid=3836) vdr: [3836] resuming replay at index 1950 (0:01:18.01) vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837) vdr: [3836] SetBrokenLink: no GOP header found in video packet vdr: [3836] setting audio track to 1 (0) vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
<<<unexpected stop of replay>>>
vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837)
vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)
vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418
vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump: 0 ra: 12582912
vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627
vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE: 13354 jump: 0 ra: 12582912
vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE: 10286 jump: 0 ra: 12582912
vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE: 31495 jump: 0 ra: 12582912
vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump: 0 ra: 12582912
vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576
vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE: 0 jump: 0 ra: 12582912
vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618)
vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
Weird, cUnbufferedFile::Read(Size=0). I'll try to reproduce this.
Sometimes it takes a long time to occur, sometimes not.
Did this start after applying my patch, or did it happen in the past too? Does it always happen at a certain position? With a specific stream or bitrate? I don't recall ever having a similar problem; the number 524288 looks a bit suspicious...
It has been happening in the past, too. It is not related to the position in the stream nor to the bitrate. It seems to be related to the dvbplayer starving at that moment. I suspect a blocking I/O read somewhere.
I did look at the requested read sizes and noticed that the big chunk requests are linked to the playback freezes.
vdr: [23223] **** trigger read with 23176
vdr: [23223] **** trigger read with 52353
vdr: [23223] **** trigger read with 21065
vdr: [23223] **** trigger read with 22996
vdr: [23223] **** trigger read with 51949
vdr: [23223] **** trigger read with 27058
vdr: [23223] **** trigger read with 24513
vdr: [23223] **** trigger read with 50280
vdr: [23223] **** trigger read with 18108
vdr: [23223] **** trigger read with 20406
vdr: [23223] **** trigger read with 101168 <-- picture freeze
vdr: [23223] **** trigger read with 20742
vdr: [23223] **** trigger read with 20922
vdr: [23223] **** trigger read with 49968
vdr: [23223] **** trigger read with 21879
vdr: [23223] **** trigger read with 21630
vdr: [23223] **** trigger read with 56692
vdr: [23223] **** trigger read with 24832
vdr: [23223] **** trigger read with 23184
vdr: [23223] **** trigger read with 59373
vdr: [23223] **** trigger read with 24832
vdr: [23223] **** trigger read with 25949
vdr: [23223] **** trigger read with 62326
vdr: [23223] **** trigger read with 27614
vdr: [23223] **** trigger read with 25029
vdr: [23223] **** trigger read with 56620
vdr: [23223] **** trigger read with 27066
vdr: [23223] **** trigger read with 22845
vdr: [23223] **** trigger read with 112627 <-- picture freeze
vdr: [23223] **** trigger read with 29840
As you can see, the requested size keeps increasing until it reaches the maximum buffer size. This is also the period with freezes in the video (late delivery).
Do these problems (0-sized reads) occur only near the end of a program being recorded?
No, you can experience a stop in the middle of a recording.
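Until the source of the zero-sized request is tracked down, an explicit trace in cUnbufferedFile::Read() might help pin down where it comes from (purely a debugging aid; read() with a zero count already returns 0, so this only adds the log line):

ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
{
  if (Size == 0) {
     dsyslog("cUnbufferedFile::Read: zero-sized request (fd:%d pos:%jd)", fd, (intmax_t)curpos);
     return 0;
     }
  // ... existing read + fadvise logic unchanged ...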
Also, I see from the above that the readahead code needs to be more aggressive:
vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
[... small reads...]
vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump: 0 ra: 12582912
the readahead window does not cover the area which is being read later -- this is certainly likely to stall playback. I'll fix this (I did not expect such a large difference in read request sizes).
The attached patch makes the readahead window grow much faster, this will cause more I/O at the start of playback, but should handle cases like the one above better. If it works correctly all the ranges mentioned in "READ:" lines should be inside the preceding "WANT:" range and the playback shouldn't stall.
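That invariant is easy to check mechanically. A throwaway checker for the WANT:/READ: log format used above could look like this (just a sketch; it ignores the jump field, so lines right after a seek may be flagged even though they are fine):

#include <stdio.h>
#include <string.h>

int main(void)
{
  char line[512];
  long long wantstart = -1, wantend = -1;
  while (fgets(line, sizeof(line), stdin)) {
        long long start, end;
        const char *p;
        if ((p = strstr(line, "WANT:")) != NULL && sscanf(p, "WANT: fd: %*d %lld .. %lld", &start, &end) == 2) {
           wantstart = start;   /* remember the most recent readahead request */
           wantend = end;
           }
        else if ((p = strstr(line, "READ:")) != NULL && sscanf(p, "READ: fd: %*d %lld .. %lld", &start, &end) == 2) {
           if (wantstart >= 0 && (start < wantstart || end > wantend))
              printf("READ %lld..%lld outside last WANT %lld..%lld\n", start, end, wantstart, wantend);
           }
        }
  return 0;
}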
I am reaching the maximum readahead window faster (almost 10s after playback starts) with the new patch.
vdr: [15502] WANT: fd: 26 1004377827 .. 1015728543 SIZE: 11350716
vdr: [15502] WANT: fd: 26 1004535337 .. 1015998967 SIZE: 11463630
vdr: [15502] WANT: fd: 26 1004682706 .. 1016274903 SIZE: 11592197
vdr: [15502] WANT: fd: 26 1004739635 .. 1016553335 SIZE: 11813700
vdr: [15502] WANT: fd: 26 1004839932 .. 1016943423 SIZE: 12103491
vdr: [15502] WANT: fd: 26 1004970331 .. 1017333735 SIZE: 12363404
vdr: [15502] WANT: fd: 26 1005150982 .. 1017733894 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1005505020 .. 1018075966 SIZE: 12570946
vdr: [15502] WANT: fd: 26 1005797742 .. 1018344142 SIZE: 12546400
vdr: [15502] WANT: fd: 26 1006185614 .. 1018631686 SIZE: 12446072
vdr: [15502] WANT: fd: 26 1006351675 .. 1018919902 SIZE: 12568227
vdr: [15502] WANT: fd: 26 1006677271 .. 1019215894 SIZE: 12538623
vdr: [15502] WANT: fd: 26 1007017481 .. 1019548174 SIZE: 12530693
vdr: [15502] WANT: fd: 26 1007229411 .. 1019812323 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1007530282 .. 1020077003 SIZE: 12546721
vdr: [15502] WANT: fd: 26 1007792199 .. 1020375111 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008063657 .. 1020646569 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008339261 .. 1020922173 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008611553 .. 1021194465 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008909033 .. 1021491945 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009225795 .. 1021808707 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009559062 .. 1022141974 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009941453 .. 1022441318 SIZE: 12499865
vdr: [15502] WANT: fd: 26 1010248984 .. 1022802262 SIZE: 12553278
Here the readahead window grows to ~5 Mbytes just after starting playback; I still need to check that this is not too fast, doesn't saturate the disk and/or link, and doesn't cause delays when jumping etc. Tested by playing a few files from an NFS mount; didn't notice any problems so far.
This is also working fine here. No delay when jumping and the start of the playback is OK.
An incremental patch would look like this (the attached one (vs 1.4.7) already includes it):
diff --git a/tools.c b/tools.c
index a14f799..e22614f 100644
--- a/tools.c
+++ b/tools.c
@@ -1186,13 +1186,13 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
            // Trigger the readahead IO, but only if we've used at least some of the previously
            // requested area. This avoids calling fadvise() after every read() call.
            size_t cachedsize = cachedend - curpos;
-           size_t ra = cachedsize + Size*2 + (size_t)jumped*1;
+           size_t ra = cachedsize + Size*8 + (size_t)jumped*1;
            if (cutting)
               ra += KILOBYTE(64);
            ra = min(readahead, ra);
            // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk.
            // (A) is important when starting w/ a small readahead.
-           if (cachedsize < (ra-ra/4) || cachedsize+KILOBYTE(256) <= ra)
+           if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra)
               FadviseRead(curpos, ra);
            }
         else if (jumped >= 0) {
            // either large forward jump, or FF (jumps by ~4xSize)
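For what it's worth, here is a stand-alone toy model of the Size*8 growth above, using a constant 24 KB read size and the 12 MB limit seen in the logs. The real logic sits in cUnbufferedFile::Read() and also depends on jumps and the trigger conditions, so the numbers are only indicative:

#include <stdio.h>
#include <stddef.h>

#define KILOBYTE(n) ((n) * 1024)
#define MEGABYTE(n) ((n) * 1024 * 1024)

int main(void)
{
  const size_t readahead = MEGABYTE(12);  /* upper limit (12582912 in the logs) */
  const size_t Size = KILOBYTE(24);       /* typical replay read request */
  size_t cachedsize = 0;                  /* requested but not yet consumed */
  int reads = 0;
  for (;;) {
      size_t ra = cachedsize + Size * 8;  /* the growth factor from the patch */
      if (ra > readahead)
         ra = readahead;
      reads++;
      if (reads % 10 == 0 || ra == readahead)
         printf("after %3d reads: readahead window %8zu bytes\n", reads, ra);
      if (ra == readahead)
         break;
      cachedsize = ra - Size;             /* one read consumed, the rest stays ahead */
      }
  return 0;
}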
artur
Client's log extract (startup):
vdr: [25486] READ: fd: 27 803118549 .. 803146397 SIZE: 27848 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803146397 .. 803191530 SIZE: 45133 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803191530 .. 808584629 SIZE: 5393099
vdr: [25486] READ: fd: 27 803191530 .. 803212652 SIZE: 21122 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803212652 .. 803234315 SIZE: 21663 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803234315 .. 803289817 SIZE: 55502 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803289817 .. 809028645 SIZE: 5738828
vdr: [25486] READ: fd: 27 803289817 .. 803312817 SIZE: 23000 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803312817 .. 803334701 SIZE: 21884 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803334701 .. 803409850 SIZE: 75149 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803409850 .. 809629837 SIZE: 6219987
vdr: [25486] READ: fd: 27 803409850 .. 803430400 SIZE: 20550 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803430400 .. 803450775 SIZE: 20375 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803450775 .. 803495691 SIZE: 44916 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803495691 .. 809989165 SIZE: 6493474
vdr: [25486] READ: fd: 27 803495691 .. 803516928 SIZE: 21237 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803516928 .. 803560475 SIZE: 43547 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803560475 .. 810337541 SIZE: 6777066
vdr: [25486] READ: fd: 27 803560475 .. 803579793 SIZE: 19318 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803579793 .. 803626190 SIZE: 46397 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803626190 .. 810708717 SIZE: 7082527
vdr: [25486] READ: fd: 27 803626190 .. 803647444 SIZE: 21254 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803647444 .. 803691436 SIZE: 43992 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803691436 .. 811060653 SIZE: 7369217
vdr: [25486] READ: fd: 27 803691436 .. 803711102 SIZE: 19666 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803711102 .. 803753355 SIZE: 42253 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803753355 .. 811398677 SIZE: 7645322
vdr: [25486] READ: fd: 27 803753355 .. 803772657 SIZE: 19302 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803772657 .. 803829583 SIZE: 56926 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803829583 .. 811854085 SIZE: 8024502
vdr: [25486] READ: fd: 27 803829583 .. 803852980 SIZE: 23397 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803852980 .. 803877264 SIZE: 24284 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803877264 .. 803931256 SIZE: 53992 jump: 0 ra: 12582912
vdr: [25486] WANT: fd: 27 803931256 .. 812286021 SIZE: 8354765
vdr: [25486] READ: fd: 27 803931256 .. 803953880 SIZE: 22624 jump: 0 ra: 12582912
vdr: [25486] READ: fd: 27 803953880 .. 803975350 SIZE: 21470 jump: 0 ra: 12582912
Is it possible to log the status of the mpeg2 decoder feeder?
Cheers,
Alex
This kind of "problems" I had too.. When I report that playback is going faster than recording. I.e. you start recording and after awhile (maybe only 10 second) you start playback of that recording. Because playback is faster that "real life" you reach (in some poin of time) place where you are 0 (zero) seconds behind on going recording. AND in that point your playback start freeze and jump loop.
On Tue, Apr 1, 2008 at 7:44 AM, JJussi linux-dvb@jjussi.com wrote:
recording. Because playback is faster than "real life", at some point in time you reach the place where you are 0 (zero) seconds behind the ongoing recording. AND at that point your playback starts freezing and jumping in a loop.
Playback is faster than watching live tv?! Maybe if you skip past the commercials or something! I've started playing back a recording many times before it was finished and never had a problem with lockups or with playback going faster than live tv!
I guess what JJussi is describing is that with some broadcasters (wrong frames/sec), or in the event of bad reception etc., some frames can get lost, and because vdr plays back from a more reliable source, e.g. from disk, it will eventually catch up to live tv. I've experienced this on my dvb-s system.
On 4/2/08, VDR User user.vdr@gmail.com wrote:
On Tue, Apr 1, 2008 at 7:44 AM, JJussi linux-dvb@jjussi.com wrote:
recording. Because playback is faster than "real life", at some point in time you reach the place where you are 0 (zero) seconds behind the ongoing recording. AND at that point your playback starts freezing and jumping in a loop.
Playback is faster than watching live tv?! Maybe if you skip past the commercials or something! I've started playing back a recording many times before it was finished and never had a problem with lockups or with playback going faster than live tv!
On Wednesday, 2 April 2008 05:11:37 VDR User wrote:
Playback is faster than watching live tv?! Maybe if you skip past the commercials or something! I've started playing back a recording many times before it was finished and never had a problem with lockups or with playback going faster than live tv!
Yes.. Check the attached message OR search for the subject "Playback to fast.." in the archives. Clemens Kirchgatterer said that that's normal...
JJussi wrote:
On Wednesday, 2 April 2008 05:11:37 VDR User wrote:
Playback is faster than watching live tv?! Maybe if you skip past the commercials or something! I've started playing back a recording many times before it was finished and never had a problem with lockups or with playback going faster than live tv!
Yes.. Check the attached message OR search for the subject "Playback to fast.." in the archives. Clemens Kirchgatterer said that that's normal...
Small drift is expected -- the clock used as reference for playback (on dvb/dxr card etc) is not synced to the one used by the broadcaster. It's a similar problem to trying to record audio using one sound card and playing it back using another card in real time -- at some point you will get either over- or underruns because the sample rates are not 100% identical.
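To put a rough number on it: a clock mismatch of, say, 50 ppm is 50 microseconds per second, i.e. about 0.18 seconds per hour or roughly 4.3 seconds per day -- small, but enough to eventually run into the end of a recording that was started only a little before playback.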
I suspect some of the issues when playing ongoing recordings just after they are extended are caused by the recorder telling the system to drop the recently written data, which interferes with the readahead done by the player; the player then has to wait for the data to be refetched from disk. O_DIRECT helps here (by not using the cache at all); OTOH it could cause playback problems at the very end (due to the small delay and larger chunks used when extending the file).
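One possible way to reduce that interference without switching to O_DIRECT would be for the writer to keep the most recently written range cached and only drop older data. Roughly like this, in the WriteBuf() flush path (a sketch reusing the member names from the patch above -- not something the posted patch actually does, and untested):

// Leave the newest 'tailkeep' bytes cached for a possible local reader,
// and only drop ranges that are older than that.
const off_t tailkeep = MEGABYTE(4);
off_t dropend = (cachedend > tailkeep) ? cachedend - tailkeep : 0;
if (dropend > cachedstart) {
   off_t headdrop = min(cachedstart, (off_t)writebuffer * 2);
   posix_fadvise(fd, cachedstart - headdrop, dropend - (cachedstart - headdrop), POSIX_FADV_DONTNEED);
   }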
artur
This must depend on at what point during the recording you start playback. Something like: before X, no problem; after X, you have the problem. I don't think I've ever started playing back a current recording before it was already half finished.