1. 04 Feb, 2012 2 commits
  2. 12 Jan, 2012 4 commits
  3. 15 Dec, 2011 2 commits
  4. 01 Dec, 2011 2 commits
  5. 17 Nov, 2011 1 commit
  6. 09 Nov, 2011 5 commits
  7. 08 Nov, 2011 3 commits
  8. 26 Oct, 2011 2 commits
    • Added comments. · b551e05e
      Peter Watkins authored
    • Fix deadlock in threaded mode when using nested rfbClientIteratorNext() calls. · 3df7537a
      Christian Beier authored
      Lengthy explanation follows...
      
      First, the scenario before this patch:
      
      We have three clients 1, 2, 3 connected. The main thread loops through
      them using rfbClientIteratorNext() (loop L1) and is currently at
      client 2, i.e. client 2's cl_2->refCount is 1. At this point we need to
      loop through the clients again while cl_2->refCount == 1, i.e. run a
      loop L2 nested within loop L1.
      
      BUT: Now client 2 disconnects. Its clientInput thread terminates its
      clientOutput thread and calls rfbClientConnectionGone(). This LOCKs
      clientListMutex and WAITs for cl_2->refCount to become 0, which means
      this thread waits for the main thread to release cl_2. Waiting, with
      clientListMutex LOCKed!
      
      Meanwhile, the main thread is about to begin the inner
      rfbClientIteratorNext() loop L2. The first call to rfbClientIteratorNext()
      LOCKs clientListMutex. BAAM. This mutex is held by cl_2's clientInput
      thread and is only released when cl_2->refCount becomes 0. The main thread
      would decrement cl_2->refCount when it continues with loop L1, but
      it's waiting for cl_2's clientInput thread to release clientListMutex,
      which never happens since that thread is waiting for the main thread to
      decrement cl_2->refCount. DEADLOCK.
      
      Now, situation with this patch:
      
      Same as above, but when client 2 disconnects, its clientInput thread
      calls rfbClientConnectionGone(). This again LOCKs clientListMutex, removes
      cl_2 from the linked list and UNLOCKs clientListMutex. The WAIT for
      cl_2->refCount to become 0 happens _after_ that. Waiting, with
      clientListMutex UNLOCKed!
      
      Therefore, the main thread can continue: it runs the inner loop L2 (now
      only visiting clients 1 and 3, since 2 was removed from the linked list),
      continues with loop L1, and finally decrements cl_2->refCount, allowing
      cl_2's clientInput thread to continue and terminate. The resources held by
      cl_2 are not free()'d by rfbClientConnectionGone() until cl_2->refCount
      becomes 0, i.e. until loop L1 has released cl_2.
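      The patched ordering described above can be sketched with plain pthreads. This is a minimal illustration under stated assumptions, not LibVNCServer's actual code; names such as client_t, iterator_next, release_client and connection_gone are made up for the sketch:

      ```c
      /* Sketch of the fixed locking order: unlink the client while holding
       * clientListMutex, UNLOCK it, and only then wait for refCount to reach
       * zero, so iterators can still run and drop their references. */
      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct client {
          int id;
          int refCount;                   /* guarded by refCountMutex */
          pthread_mutex_t refCountMutex;
          pthread_cond_t deleteCond;      /* signalled when refCount hits 0 */
          struct client *next;
      } client_t;

      static client_t *clientHead = NULL;
      static pthread_mutex_t clientListMutex = PTHREAD_MUTEX_INITIALIZER;

      /* Iterator side (the main thread's loops L1/L2): take a reference
       * under clientListMutex, drop it via release_client() when done. */
      static client_t *iterator_next(client_t *prev)
      {
          pthread_mutex_lock(&clientListMutex);
          client_t *cl = prev ? prev->next : clientHead;
          if (cl) {
              pthread_mutex_lock(&cl->refCountMutex);
              cl->refCount++;
              pthread_mutex_unlock(&cl->refCountMutex);
          }
          pthread_mutex_unlock(&clientListMutex);
          return cl;
      }

      static void release_client(client_t *cl)
      {
          pthread_mutex_lock(&cl->refCountMutex);
          if (--cl->refCount == 0)
              pthread_cond_broadcast(&cl->deleteCond);
          pthread_mutex_unlock(&cl->refCountMutex);
      }

      /* Disconnect side, patched ordering: unlink under clientListMutex,
       * unlock it, and wait for refCount == 0 only AFTERWARDS. */
      static void connection_gone(client_t *cl)
      {
          pthread_mutex_lock(&clientListMutex);
          client_t **pp = &clientHead;
          while (*pp && *pp != cl)
              pp = &(*pp)->next;
          if (*pp)
              *pp = cl->next;
          pthread_mutex_unlock(&clientListMutex); /* crucial: unlock BEFORE waiting */

          pthread_mutex_lock(&cl->refCountMutex);
          while (cl->refCount > 0)
              pthread_cond_wait(&cl->deleteCond, &cl->refCountMutex);
          pthread_mutex_unlock(&cl->refCountMutex);

          printf("client %d gone, resources can be freed\n", cl->id);
          free(cl);
      }

      static void *gone_thread(void *arg)
      {
          connection_gone((client_t *)arg);
          return NULL;
      }

      int main(void)
      {
          client_t *cl = calloc(1, sizeof *cl);
          cl->id = 2;
          pthread_mutex_init(&cl->refCountMutex, NULL);
          pthread_cond_init(&cl->deleteCond, NULL);
          clientHead = cl;

          client_t *held = iterator_next(NULL); /* main thread holds a reference */

          pthread_t t;
          pthread_create(&t, NULL, gone_thread, cl);
          release_client(held); /* lets connection_gone() finish its wait */
          pthread_join(t, NULL);
          puts("no deadlock");
          return 0;
      }
      ```

      Compile with `cc demo.c -lpthread`. With the pre-patch ordering (waiting for refCount while still holding clientListMutex), a main thread entering iterator_next() here would block forever and the release could never happen.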
  9. 17 Oct, 2011 2 commits
    • Update AUTHORS · e3b8aaab
      Johannes Schindelin authored
      Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    • Fix memory leak · fba4818a
      George Fleury authored
      I was debugging some code tonight and I found a pointer that is never
      being freed, so I think there is a memory leak, and indeed there is...
      
      Here is the malloc call chain, in reverse order:
      
      ( malloc cl->statEncList )
      	<- rfbStatLookupEncoding
      	<- rfbStatRecordEncodingSent
      	<- rfbSendCursorPos
      	<- rfbSendFramebufferUpdate
      	<- rfbProcessEvents
      
      I didn't look through the whole LibVNCServer API, but I am using
      rfbReverseConnection with rfbProcessEvents, and when the client
      connection dies I call rfbShutdownServer and rfbScreenCleanup,
      yet the memory malloc'd in rfbStatLookupEncoding is never freed.
      
      So to free the stats I added a call to rfbResetStats(cl) after
      rfbPrintStats(cl) in rfbClientConnectionGone in rfbserver.c, before the
      cl pointer is freed (rfbserver.c line 555). This fixes the memory leak.
      Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
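      The leak and the fix can be illustrated without LibVNCServer itself. In this hedged sketch, stat_t, lookup_encoding and reset_stats merely stand in for cl->statEncList, rfbStatLookupEncoding and rfbResetStats; the real structures differ:

      ```c
      /* Illustrative sketch of the leak: a per-client linked list of stat
       * entries is malloc'd lazily on first use; freeing the client struct
       * without walking that list first leaks every node. */
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct stat_entry {
          int encoding;
          unsigned long sentCount;
          struct stat_entry *next;
      } stat_t;

      typedef struct client {
          stat_t *statEncList;  /* grows lazily, cf. cl->statEncList */
      } client_t;

      /* cf. rfbStatLookupEncoding: find an entry, or lazily malloc one. */
      static stat_t *lookup_encoding(client_t *cl, int encoding)
      {
          stat_t *s;
          for (s = cl->statEncList; s; s = s->next)
              if (s->encoding == encoding)
                  return s;
          s = calloc(1, sizeof *s);
          s->encoding = encoding;
          s->next = cl->statEncList;
          cl->statEncList = s;
          return s;
      }

      static void record_encoding_sent(client_t *cl, int encoding)
      {
          lookup_encoding(cl, encoding)->sentCount++;
      }

      /* cf. rfbResetStats: walk and free the list on teardown, BEFORE the
       * client struct itself is freed -- this is the fix. */
      static void reset_stats(client_t *cl)
      {
          stat_t *s = cl->statEncList;
          while (s) {
              stat_t *next = s->next;
              free(s);
              s = next;
          }
          cl->statEncList = NULL;
      }

      int main(void)
      {
          client_t *cl = calloc(1, sizeof *cl);
          record_encoding_sent(cl, 5);  /* encoding numbers are arbitrary here */
          record_encoding_sent(cl, 5);
          record_encoding_sent(cl, 0);
          reset_stats(cl);              /* without this, both stat nodes leak */
          free(cl);
          puts("stats freed");
          return 0;
      }
      ```

      Running this under a leak checker such as Valgrind with reset_stats() removed reports the two stat nodes as definitely lost, mirroring the report against rfbStatLookupEncoding.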
  10. 12 Oct, 2011 1 commit
  11. 10 Oct, 2011 1 commit
  12. 06 Oct, 2011 2 commits
  13. 04 Oct, 2011 3 commits
  14. 22 Sep, 2011 1 commit
  15. 21 Sep, 2011 1 commit
  16. 20 Sep, 2011 3 commits
  17. 19 Sep, 2011 5 commits