As already announced, we're continuing the planning for this year's media subsystem workshop.
To avoid flooding the main ML with workshop specifics, a new ML was created: workshop-2011@linuxtv.org
I'll also be updating the event page at: http://www.linuxtv.org/events.php
Over the one-year period, we had 242 developers contributing to the subsystem. Thank you all for that! Unfortunately, the space there is limited, and we can't afford to have all developers there.
Because of that, some criteria had to be applied to create a short list of people who were invited today to participate.
The main criterion was to select the developers who made significant contributions to the media subsystem over the last one-year period, measured in terms of the number of commits and changed lines in the kernel drivers/media tree.
As the criterion was based on the number of kernel patches, userspace-only developers weren't included in the invitations. It would be great to have open-source application developers there as well, to help us tune what's needed from the applications' point of view.
So, if you're leading the development of a V4L and/or DVB open-source application and want to be there, or you think you can make good contributions to help improve the subsystem, please feel free to send us an email.
With regard to the themes, we've received the following proposals so far:
---------------------------------------------------------+----------------------
 THEME                                                    | Proposed-by:
---------------------------------------------------------+----------------------
 Buffer management: snapshot mode                         | Guennadi
 Rotation in webcams in tablets while streaming is active | Hans de Goede
 V4L2 Spec – ambiguities fix                              | Hans Verkuil
 V4L2 compliance test results                             | Hans Verkuil
 Media Controller presentation (probably for Wed, 25)     | Laurent Pinchart
 Workshop summary presentation on Wed, 25                 | Mauro Carvalho Chehab
---------------------------------------------------------+----------------------
From my side, I also have the following proposals:
1) DVB API consistency - what to do with the audio and video DVB APIs that conflict with V4L2 and (somewhat) with ALSA?
2) Multi-FE support - how should we handle a frontend with multiple delivery systems, like the DRX-K frontend?
3) videobuf2 - migration plans for legacy drivers
4) NEC IR decoding - how should we handle 32-, 24-, and 16-bit protocol variations?
Even if you won't be there, please feel free to propose themes for discussion, to help us improve the subsystem even more.
Thank you! Mauro
On Wednesday, 3 August 2011 20:21:05, Mauro Carvalho Chehab wrote:
I'll also be updating the event page at: http://www.linuxtv.org/events.php
There's no Wednesday 25 in October 2011.
On 03-08-2011 14:34, Rémi Denis-Courmont wrote:
On Wednesday, 3 August 2011 20:21:05, Mauro Carvalho Chehab wrote:
I'll also be updating the event page at: http://www.linuxtv.org/events.php
There's no Wednesday 25 in October 2011.
Thanks for noticing it! I meant to say Tuesday, Oct 25.
The original KS schedule was shifted by one day to avoid conflicts with the other Linux events happening there. So the last day, originally scheduled for Wednesday, was moved to Tuesday.
Thanks, Mauro
Correction: it should be Tue, Oct 25 instead (i.e., the Media Controller presentation and the workshop summary). Sorry for the typo.
Rémi, thanks for pointing it out!
Thanks! Mauro
On Wednesday, August 03, 2011 19:45:36 Mauro Carvalho Chehab wrote:
Correction: it should be Tue, Oct 25 instead (i.e., the Media Controller presentation and the workshop summary). Sorry for the typo.
So the presentation and summary are on Tuesday, but when is the workshop itself? Is it on the Monday or the Sunday?
It would be nice to know so I can plan my stay in Prague and coordinate with the other conferences going on at the same time.
Regards,
Hans
On 08-08-2011 03:22, Hans Verkuil wrote:
So the presentation and summary are on Tuesday, but when is the workshop itself? Is it on the Monday or the Sunday?
It would be nice to know so I can plan my stay in Prague and coordinate with the other conferences going on at the same time.
The workshop itself will be on Sunday, and the presentations on Tuesday. Monday will be for KS/2011 invitees only. LinuxCon and ELC Europe will start on Wednesday.
The change for the workshop to start on Sunday was made to allow people to better participate in LinuxCon and ELCE.
Regards, Mauro.
On Monday, August 08, 2011 15:25:26 Mauro Carvalho Chehab wrote:
The workshop itself will be on Sunday, and the presentations on Tuesday. Monday will be for KS/2011 invitees only. LinuxCon and ELC Europe will start on Wednesday.
Ah, that's good to know. Thank you for the information!
The GStreamer conference is on Monday and Tuesday so I'll be busy from Sunday to Friday. That's going to be one busy week :-)
Regards,
Hans
On Monday, 8 August 2011 16:25:26, Mauro Carvalho Chehab wrote:
The workshop itself will be on Sunday, and the presentations on Tuesday. Monday will be for KS/2011 invitees only. LinuxCon and ELC Europe will start on Wednesday.
So the workshop is only Sunday, is that right? Is it tied to any of the registration fees (LinuxCon is steep if you are not sponsored nor studying)?
On 11-08-2011 14:49, Rémi Denis-Courmont wrote:
So the workshop is only Sunday, is that right?
Sunday and Tuesday. The discussions will happen on Sunday. On Tuesday, we'll have the opportunity to exchange some information with the other people from KS and from the other workshops.
As Monday will be free for most people, it probably makes sense to organize some informal meetings there for those who won't be at the KS-only day.
Is it tied to any of the registration fees (LinuxCon is steep if you are not sponsored nor studying)?
No, but it requires an invitation, and I need to pass the names of the participants to the KS organizers.
So, please let me know if you intend to be there, so that I can send you an invitation.
Thanks, Mauro
Hello,
On Thursday, 11 August 2011 22:00:19, Mauro Carvalho Chehab wrote:
Sunday and Tuesday. The discussions will happen on Sunday. On Tuesday, we'll have the opportunity to exchange some information with the other people from KS and from the other workshops.
I might be able to come on Sunday if there is still room. Sorry for the delay. I cannot justify the expense for Tuesday without employer support.
Best regards,
On 25-09-2011 16:54, Rémi Denis-Courmont wrote:
I might be able to come on Sunday if there is still room. Sorry for the delay. I cannot justify the expense for Tuesday without employer support.
Hi Rémi,
Sorry for not answering earlier. The event is full, but we had one person who can't be there anymore, so if you're interested, I may be able to put you in his place. The most important day is Sunday, when most discussions will happen. I've also reserved a room for Monday for extra discussions, for the people who won't be at KS or the GStreamer conf.
Regards, Mauro
On Wed, 05 Oct 2011 15:30:02 -0300, Mauro Carvalho Chehab <mchehab@redhat.com> wrote:
Hi Rémi,
Sorry for not answering earlier. The event is full, but we had one person who can't be there anymore, so if you're interested, I may be able to put you in his place.
Ok, I'll come then. Thanks and sorry for the delays again.
(I've registered already)
On Wed, 3 Aug 2011, Mauro Carvalho Chehab wrote:
Even if you won't be there, please feel free to propose themes for discussion, to help us improve the subsystem even more.
Mauro,
Not saying that you need to change the program for this session to deal with this topic, but an old and vexing problem is dual-mode devices. It is an issue which needs some kind of unified approach, and, in my opinion, consensus about policy and methodology.
As a very good example of this problem, several of the cameras that I have supported as GSPCA devices in their webcam modality are also still cameras and are supported, as still cameras, in Gphoto. This can cause a collision between driver software in userspace which functions with libusb and, on the other hand, a kernel driver which tries to grab the device.
Recent attempts to deal with this problem involve the incorporation of code in libusb which disables a kernel module that has already grabbed the device, allowing the userspace driver to function. This has made life a little bit easier for some people, but not for everybody, because the device needs to be re-plugged in order to re-activate the kernel support. But some of the "user-friendly" desktop setups used by some distros will automatically start up a dual-mode camera with a gphoto-based program, thereby making it impossible for the camera to be used as a webcam unless the user goes for a crash course in how to disable the "feature" which has been so thoughtfully (thoughtlessly?) provided.
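To make the mechanism concrete, here is a minimal sketch of the libusb-1.0 calls involved, assuming a userspace still-camera driver that wants interface 0 of the device. The vendor/product IDs are placeholders and error handling is omitted; this is only an illustration, not gphoto's actual code.

#include <libusb.h>

int main(void)
{
        libusb_context *ctx;
        libusb_device_handle *h;

        libusb_init(&ctx);
        /* Placeholder IDs: open the dual-mode camera. */
        h = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);

        /* If a kernel driver (e.g. a gspca subdriver) already grabbed
         * interface 0, unbind it so the userspace driver can claim it. */
        if (libusb_kernel_driver_active(h, 0) == 1)
                libusb_detach_kernel_driver(h, 0);
        libusb_claim_interface(h, 0);

        /* ... talk to the still-camera side of the device here ... */

        libusb_release_interface(h, 0);
        /* Hand the device back to the kernel driver instead of requiring
         * the user to re-plug it (one possible refinement of the current
         * behaviour). */
        libusb_attach_kernel_driver(h, 0);

        libusb_close(h);
        libusb_exit(ctx);
        return 0;
}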
As the problem is not confined to cameras but also affects some other devices, such as DSL modems which have a partition on them and are thus seen as Mass Storage devices, perhaps it is time to try to find a systematic approach to problems like this.
There are of course several possible approaches.
1. A kernel module should handle everything related to connecting up the hardware. In that case, the existing userspace driver has to be modified to use the kernel module instead of libusb. Those who support this option would say that it gets everything under the control of the kernel, where it belongs. OTOH, the possible result is to create a minor mess in projects like Gphoto.
2. The kernel module should be abolished, and all of its functionality moved to userspace. This would of course involve difficulties approximately equivalent to item 1. An advantage, in the eyes of some, would be to cut down on the yet-another-driver-for-yet-another-piece-of-peculiar-hardware syndrome which obviously contributes to an in principle unlimited increase in the size of the kernel codebase. A disadvantage would be that it would create some disruption in webcam support.
3. A further modification to libusb reactivates the kernel module automatically, as soon as the userspace app which wanted to access the device through a libusb-based driver library is closed. This seems attractive, but it has certain deficiencies as well. One of them is that it can not necessarily provide a smooth and informative user experience, since circumstances can occur in which something appears to go wrong, but the user gets no clear message saying what the problem is. In other words, it is a patchwork solution which only slightly refines the current patchwork solution in libusb, which is in itself only a slight improvement on the original, unaddressed problem.
4. ???
Several people are interested in this problem, but not much progress has been made at this time. I think that the topic ought to be put somehow on the front burner so that lots of people will try to think of the best way to handle it. Many eyes, and all that.
Not saying change your schedule, as I said. Have a nice conference. I wish I could attend. But I do hope by this message to raise some general concern about this problem.
Theodore Kilgore
On 03-08-2011 16:53, Theodore Kilgore wrote:
Not saying that you need to change the program for this session to deal with this topic, but an old and vexing problem is dual-mode devices. It is an issue which needs some kind of unified approach, and, in my opinion, consensus about policy and methodology.
That's an interesting issue.
A solution like (3) is a little bit out of scope, as it is a pure userspace (or a mixed userspace USB stack) solution.
Technically speaking, letting the same device be handled by either a userspace or a kernelspace driver doesn't seem smart to me, due to:
- duplicated efforts to maintain both drivers;
- it is hard to keep a kernel driver in sync with a userspace driver, as you've pointed out.
So, we're left with (1) or (2).
Moving the solution entirely to userspace would additionally have the problem of two applications trying to access the same hardware through two different userspace instances (for example, an incoming videoconf call while Gphoto is open, assuming that the videoconf application would also use a userspace driver).
IMO, the right solution is to work on a proper snapshot mode in kernelspace, and to move the drivers that already have a kernelspace driver out of Gphoto.
That said, there is a proposed topic for snapshot buffer management. Maybe it can cover the remaining needs for taking high-quality pictures in the kernel.
The whole idea is to allocate additional buffers for snapshots: the camera may be streaming in low quality/low resolution, and, once a snapshot is requested, it takes one high quality/high resolution picture.
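Just to make the idea concrete, here is a rough sketch of what an application has to do today using only the existing V4L2 ioctls (device path, pixel format and resolutions below are only examples, and buffer handling and error checks are omitted): it must stop the low-resolution stream and renegotiate the format to grab one high-resolution frame. The snapshot buffers proposed above would let the driver keep the preview stream alive instead of this teardown.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void set_format(int fd, unsigned int w, unsigned int h)
{
        struct v4l2_format fmt = {0};

        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = w;
        fmt.fmt.pix.height = h;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        ioctl(fd, VIDIOC_S_FMT, &fmt);
}

int main(void)
{
        int fd = open("/dev/video0", O_RDWR);
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        set_format(fd, 320, 240);        /* low-res preview stream */
        /* ... VIDIOC_REQBUFS, VIDIOC_QBUF, VIDIOC_STREAMON, and the usual
         * dequeue/requeue loop for the preview frames ... */

        /* The user presses the "take picture" button: today the stream has
         * to be torn down and the format renegotiated, losing the preview
         * in the meantime. */
        ioctl(fd, VIDIOC_STREAMOFF, &type);
        set_format(fd, 1600, 1200);      /* full-resolution still */
        /* ... free and re-request buffers, VIDIOC_STREAMON, dequeue one
         * frame, VIDIOC_STREAMOFF, then switch back to 320x240 ... */

        return 0;
}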
Thanks, Mauro
On Wed, 3 Aug 2011, Mauro Carvalho Chehab wrote:
On 03-08-2011 16:53, Theodore Kilgore wrote:
Not saying change your schedule, as I said. Have a nice conference. I wish I could attend. But I do hope by this message to raise some general concern about this problem.
I meant this. Two ways. First, I knew when the conference was announced that it would severely conflict with the schedule of my workplace (right after the start of the academic semester). So I had simply to write off a conference which I really think I would have enjoyed attending. Second, I am hoping to raise general interest in a rather vexing issue. The problem here, in a nutshell, originates from a conflict between user convenience and the Linux security model. Nobody wants to sacrifice either of these. More cleverness is needed.
That's an interesting issue.
Yes.
A solution like (3) is a little bit out of scope, as it is a pure userspace (or a mixed userspace USB stack) solution.
And does not completely solve the problem, either.
Moving the solution entirely to userspace would additionally have the problem of two applications trying to access the same hardware through two different userspace instances (for example, an incoming videoconf call while Gphoto is open, assuming that the videoconf application would also use a userspace driver).
Yes, that kind of thing is an obvious problem. Actually, though, it may be that this had just better not happen. For some of the hardware that I know of, it could be a real problem no matter what approach would be taken. For example, certain specific dual-mode cameras will delete all data stored on the camera if the camera is fired up in webcam mode. To drop Gphoto suddenly in order to do the videoconf call would, on such cameras, result in the automatic deletion of all photos on the camera even if those photos had not yet been downloaded. Presumably, one would not want to do that.
IMO, the right solution is to work on a proper snapshot mode in kernelspace, and to move the drivers that already have a kernelspace driver out of Gphoto.
Well, the problem with that is, a still camera and a webcam are entirely different beasts. Still photos stored in the memory of an external device, waiting to be downloaded, are not snapshots. Thus, access to those still photos is not access to snapshots. Things are not that simple.
That said, there is a proposed topic for snapshot buffer management. Maybe it can cover the remaining needs for taking high-quality pictures in the kernel.
Again, when downloading photo images which are _stored_ on the camera one is not "taking high quality pictures." Different functionality is involved. This may involve, for example, a different Altsetting for the USB device and may also require the use of Bulk transport instead of Isochronous transport.
The whole idea is to allocate additional buffers for snapshots: the camera may be streaming in low quality/low resolution, and, once a snapshot is requested, it takes one high quality/high resolution picture.
The ability to "take" a photo is present on some still cameras and not on others. "Some still cameras" includes some dual-mode cameras. For dual-mode cameras which can be requested to "take" a photo while running in webcam mode, the ability to do so is, generally speaking, present in the kernel driver.
To present the problem more simply, a webcam is, essentially, a device of USB class Video (even if the device uses proprietary protocols, this is at least conceptually true). This is true because a webcam streams video data. However, a still camera is, in its essence as a computer peripheral, a USB mass storage device (even if the device has a proprietary protocol and even if it will not do everything one would expect from a normal mass storage device). That is, a still camera can be considered as a device which contains data, and one needs to get the data from there to the computer, and then to process said data. It is when the two different kinds of device are married together in one piece of physical hardware, with the same USB Vendor:Product code, that trouble follows.
I suggest that we continue this discussion after the conference. I expect that you and several others who I think are interested in this topic are rather busy getting ready for the conference. I also hope that some of those people read this, since I think that a general discussion is needed. The problem will, after all, not go away. It has been with us for years.
Theodore Kilgore
On 03-08-2011 20:20, Theodore Kilgore wrote:
I meant this. Two ways. First, I knew when the conference was announced that it would severely conflict with the schedule of my workplace (right after the start of the academic semester). So I had simply to write off a conference which I really think I would have enjoyed attending.
Ah, I see.
Second, I am hoping to raise general interest in a rather vexing issue. The problem here, in a nutshell, originates from a conflict between user convenience and the Linux security model. Nobody wants to sacrifice either of these. More cleverness is needed.
Yes, that kind of thing is an obvious problem. Actually, though, it may be that this had just better not happen. For some of the hardware that I know of, it could be a real problem no matter what approach would be taken. For example, certain specific dual-mode cameras will delete all data stored on the camera if the camera is fired up in webcam mode. To drop Gphoto suddenly in order to do the videoconf call would, on such cameras, result in the automatic deletion of all photos on the camera even if those photos had not yet been downloaded. Presumably, one would not want to do that.
So, in other words, on such cameras the kernel driver should return -EBUSY if there are photos stored on the camera and someone tries to start streaming.
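A hypothetical sketch of that policy in a gspca subdriver's start callback; sd_photos_stored() is an imaginary helper here, since each camera would need its own way to check whether unsaved pictures are present:

static int sd_start(struct gspca_dev *gspca_dev)
{
        /* Refuse to start streaming while photos are still stored on the
         * camera, so webcam mode can't wipe them.  sd_photos_stored() is
         * an imaginary helper; a real subdriver would query the camera in
         * a device-specific way. */
        if (sd_photos_stored(gspca_dev))
                return -EBUSY;

        /* ... normal streaming setup continues here ... */
        return 0;
}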
IMO, the right solution is to work on a proper snapshot mode in kernelspace, and to move the drivers that already have a kernelspace driver out of Gphoto.
Well, the problem with that is, a still camera and a webcam are entirely different beasts. Still photos stored in the memory of an external device, waiting to be downloaded, are not snapshots. Thus, access to those still photos is not access to snapshots. Things are not that simple.
Yes, stored photos require a different API, as Hans pointed out. I think that some cameras just export them as USB storage. For those, we may eventually need some sort of locking between USB storage and V4L.
Again, when downloading photo images which are _stored_ on the camera one is not "taking high quality pictures." Different functionality is involved. This may involve, for example, a different Altsetting for the USB device and may also require the use of Bulk transport instead of Isochronous transport.
OK. The gspca driver already supports that. All we need to do is implement a proper API for retrieving still photos.
The ability to "take" a photo is present on some still cameras and not on others. "Some still cameras" includes some dual-mode cameras. For dual-mode cameras which can be requested to "take" a photo while running in webcam mode, the ability to do so is, generally speaking, present in the kernel driver.
To present the problem more simply, a webcam is, essentially, a device of USB class Video (even if the device uses proprietary protocols, this is at least conceptually true). This is true because a webcam streams video data. However, a still camera is, in its essence as a computer peripheral, a USB mass storage device (even if the device has a proprietary protocol and even if it will not do everything one would expect from a normal mass storage device). That is, a still camera can be considered as a device which contains data, and one needs to get the data from there to the computer, and then to process said data. It is when the two different kinds of device are married together in one piece of physical hardware, with the same USB Vendor:Product code, that trouble follows.
We'll need to split the problem into the possible alternatives, as the solution may be different for each.
If I understood you correctly, there are 4 possible combinations:
1) UVC + USB mass storage;
2) UVC + Vendor Class mass storage;
3) Vendor Class video + USB mass storage;
4) Vendor Class video + Vendor Class mass storage.
For (1) and (3), it doesn't make sense to re-implement USB mass storage in V4L. We may just need some sort of resource locking, if the device can't provide both functions at the same time.
For (2) and (4), we'll need an extra API like the one Hans is proposing, plus a resource-locking scheme.
That said, "resource locking" is currently one big problem we need to solve in the media subsystem.
We already have some problems like that on devices that implement both the V4L and DVB APIs. For example, you can't use the same tuner to watch analog and digital TV at the same time. Also, several devices have I2C switches; you can't, for example, poll for an RC code while the I2C switch is open for tuner access.
This is the same kind of problem that happens, for example, with 3G modems that can work either as USB storage or as a modem.
This sounds like a good theme for the workshop, or even for KS/2011.
(Added Hans to the reply. I already knew that he shares my concerns about this issue, and I am glad he has joined the discussion.)
On Thu, 4 Aug 2011, Mauro Carvalho Chehab wrote:
Em 03-08-2011 20:20, Theodore Kilgore escreveu:
On Wed, 3 Aug 2011, Mauro Carvalho Chehab wrote:
Em 03-08-2011 16:53, Theodore Kilgore escreveu:
On Wed, 3 Aug 2011, Mauro Carvalho Chehab wrote:
As already announced, we're continuing the planning for this year's media subsystem workshop.
To avoid overriding the main ML with workshop-specifics, a new ML was created: workshop-2011@linuxtv.org
I'll also be updating the event page at: http://www.linuxtv.org/events.php
Over the one-year period, we had 242 developers contributing to the subsystem. Thank you all for that! Unfortunately, the space there is limited, and we can't afford to have all developers there.
Due to that some criteria needed to be applied to create a short list of people that were invited today to participate.
The main criteria were to select the developers that did significant contributions for the media subsystem over the last 1 year period, measured in terms of number of commits and changed lines to the kernel drivers/media tree.
As the used criteria were the number of kernel patches, userspace-only developers weren't included on the invitations. It would be great to have there open source application developers as well, in order to allow us to tune what's needed from applications point of view.
So, if you're leading the development of some V4L and/or DVB open-source application and want to be there, or you think you can give good contributions for helping to improve the subsystem, please feel free to send us an email.
With regards to the themes, we've received, up to now, the following proposals:
---------------------------------------------------------+----------------------
 THEME                                                    | Proposed-by:
---------------------------------------------------------+----------------------
 Buffer management: snapshot mode                         | Guennadi
 Rotation in webcams in tablets while streaming is active | Hans de Goede
 V4L2 Spec – ambiguities fix                              | Hans Verkuil
 V4L2 compliance test results                             | Hans Verkuil
 Media Controller presentation (probably for Wed, 25)     | Laurent Pinchart
 Workshop summary presentation on Wed, 25                 | Mauro Carvalho Chehab
---------------------------------------------------------+----------------------
From my side, I also have the following proposals:
- DVB API consistency - what to do with the audio and video DVB API's
that conflict with V4L2 and (somewhat) with ALSA?
- Multi FE support - How should we handle a frontend with multiple
delivery systems like DRX-K frontend?
- videobuf2 - migration plans for legacy drivers
- NEC IR decoding - how should we handle 32, 24, and 16 bit protocol variations?
Even if you won't be there, please feel free to propose themes for discussion, in order to help us to improve even more the subsystem.
Thank you! Mauro
Mauro,
Not saying that you need to change the program for this session to deal with this topic, but an old and vexing problem is dual-mode devices. It is an issue which needs some kind of unified approach, and, in my opinion, consensus about policy and methodology.
As a very good example of this problem, several of the cameras that I have supported as GSPCA devices in their webcam modality are also still cameras and are supported, as still cameras, in Gphoto. This can cause a collision between userspace driver software which works through libusb and, on the other hand, a kernel driver which tries to grab the device.
Recent attempts to deal with this problem involve the incorporation of code in libusb which disables a kernel module that has already grabbed the device, allowing the userspace driver to function. This has made life a little bit easier for some people, but not for everybody, because the device needs to be re-plugged in order to re-activate the kernel support. But some of the "user-friendly" desktop setups used by some distros will automatically start up a dual-mode camera with a gphoto-based program, thereby making it impossible for the camera to be used as a webcam unless the user goes for a crash course in how to disable the "feature" which has been so thoughtfully (thoughtlessly?) provided.
As the problem is not confined to cameras but also affects some other devices, such as DSL modems which have a partition on them and are thus seen as Mass Storage devices, perhaps it is time to try to find a systematic approach to problems like this.
There are of course several possible approaches.
- A kernel module should handle everything related to connecting up the
hardware. In that case, the existing userspace driver has to be modified to use the kernel module instead of libusb. Those who support this option would say that it gets everything under the control of the kernel, where it belongs. OTOH, the possible result is to create a minor mess in projects like Gphoto.
- The kernel module should be abolished, and all of its functionality
moved to userspace. This would of course involve difficulties approximately equivalent to item 1. An advantage, in the eyes of some, would be to cut down on the yet-another-driver-for-yet-another-piece-of-peculiar-hardware syndrome which obviously contributes to an in principle unlimited increase in the size of the kernel codebase. A disadvantage would be that it would create some disruption in webcam support.
- A further modification to libusb reactivates the kernel module
automatically, as soon as the userspace app which wanted to access the device through a libusb-based driver library is closed. This seems attractive, but it has certain deficiencies as well. One of them is that it can not necessarily provide a smooth and informative user experience, since circumstances can occur in which something appears to go wrong, but the user gets no clear message saying what the problem is. In other words, it is a patchwork solution which only slightly refines the current patchwork solution in libusb, which is in itself only a slight improvement on the original, unaddressed problem.
- ???
Several people are interested in this problem, but not much progress has been made at this time. I think that the topic ought to be put somehow on the front burner so that lots of people will try to think of the best way to handle it. Many eyes, and all that.
Not saying change your schedule, as I said. Have a nice conference. I wish I could attend. But I do hope by this message to raise some general concern about this problem.
I meant this. Two ways. First, I knew when the conference was announced that it would severely conflict with the schedule of my workplace (right after the start of the academic semester). So I had simply to write off a conference which I really think I would have enjoyed attending.
Ah, I see.
Exactly.
Second, I am hoping to raise general interest in a rather vexing issue. The problem here, in a nutshell, originates from a conflict between user convenience and the Linux security model. Nobody wants to sacrifice either of these. More cleverness is needed.
That's an interesting issue.
Yes.
A solution like (3) is a little bit out of scope, as it is a pure userspace (or a mixed userspace USB stack) solution.
And does not completely solve the problem, either.
Technically speaking, letting the same device be handled by either a userspace or a kernelspace driver doesn't seem smart to me, due to:
- Duplicated efforts to maintain both drivers;
- It is hard to sync a kernel driver with a userspace driver,
as you've pointed out.
So, we're between (1) and (2).
Moving the solution entirely to userspace will have, additionally, the problem of having two applications trying to access the same hardware using two different userspace instances (for example, an incoming videoconf call while Gphoto is open, assuming that such a videoconf call would also have a userspace driver).
Yes, that kind of thing is an obvious problem. Actually, though, it may be that this had just better not happen. For some of the hardware that I know of, it could be a real problem no matter what approach would be taken. For example, certain specific dual-mode cameras will delete all data stored on the camera if the camera is fired up in webcam mode. To drop Gphoto suddenly in order to do the videoconf call would, on such cameras, result in the automatic deletion of all photos on the camera even if those photos had not yet been downloaded. Presumably, one would not want to do that.
Some of the sq905 cameras in particular will do this. It depends upon the firmware version. Indeed, for those which do, the same USB command which starts streaming is exploited in the Gphoto driver for deletion of all photos stored on the camera. For the other firmware versions, there is in fact no way to delete all the photos, except to push buttons on the camera case. This is by the way a typical example of the very rudimentary, minimalist interface of some of these cheap cameras.
So, in other words, on such cameras the kernel driver should return -EBUSY if there are photos stored on them and someone tries to stream.
Probably, this should work the other way around, too. If not, then there is the question of closing the streaming in some kind of orderly fashion.
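For illustration, here is a minimal sketch of what such a check could look like inside a dual-mode camera driver. All names (struct, fields, function) are hypothetical and made up for this discussion; this is not existing gspca code:

#include <linux/types.h>
#include <linux/mutex.h>
#include <linux/errno.h>

/* Hypothetical per-camera state. */
struct sq_camera {
	struct mutex mode_lock;		/* serializes webcam vs. stillcam use */
	int stored_photo_count;		/* photos still stored on the camera */
	bool streaming;
};

static int sq_start_streaming(struct sq_camera *cam)
{
	int ret = 0;

	mutex_lock(&cam->mode_lock);
	if (cam->stored_photo_count > 0) {
		/* On some firmware versions, starting the stream would
		 * wipe the stored photos, so refuse to start. */
		ret = -EBUSY;
	} else {
		cam->streaming = true;
		/* ... issue the vendor command that starts the stream ... */
	}
	mutex_unlock(&cam->mode_lock);
	return ret;
}

The same kind of test, with the roles reversed, would cover the "other way around" case: refuse photo download while streaming is active.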
IMO, the right solution is to work on a proper snapshot mode in kernelspace, and to move the drivers that already have a kernelspace driver out of Gphoto.
Well, the problem with that is, a still camera and a webcam are entirely different beasts. Still photos stored in the memory of an external device, waiting to be downloaded, are not snapshots. Thus, access to those still photos is not access to snapshots. Things are not that simple.
Yes, stored photos require a different API, as Hans pointed.
Yes again. His observations seem to me to be saying exactly the same thing that I did.
I think that some cameras just export them as a USB storage. For those, we may eventually need some sort of locking between the USB storage and V4L.
I can imagine that this could be the case. Also, to be entirely logical, one might imagine that a PTP camera could be fired up in streaming mode, too. I myself do not know of any cameras which are both USB storage and streaming cameras. In fact, as I understand the USB classes, such a thing would be in principle forbidden. However, the practical consequence could be that sooner or later someone is going to do just that and that deviant hardware is going to sell like hotcakes and we are going to get pestered.
That said, there is a proposed topic for snapshot buffer management. Maybe it can cover the remaining needs for taking high-quality pictures in the kernel.
Again, when downloading photo images which are _stored_ on the camera one is not "taking high quality pictures." Different functionality is involved. This may involve, for example, a different Altsetting for the USB device and may also require the use of Bulk transport instead of Isochronous transport.
Ok. The gspca driver supports it already. All we need to do is to implement a proper API for retrieving still photos.
Yes, I believe that Hans has some idea to do something like this:
1. kernel module creates a stillcam device as well as a /dev/video, for those cameras for which it is appropriate
2. libgphoto2 driver is modified so as to access /dev/camera through the kernel, instead of talking to the camera through libusb.
Hans has written some USB Mass Storage digital picture frame drivers for Gphoto, which do something similar.
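Just to make the idea of an extra node concrete, here is a rough sketch of how a driver could register a second character device next to its /dev/videoX node, using the kernel's misc device facility. Everything here is illustrative only (names, node name, and the idea that a misc device is the right vehicle are all assumptions, not an existing driver):

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>

static ssize_t camx_read(struct file *file, char __user *buf,
			 size_t count, loff_t *ppos)
{
	/* Here the driver would return stored-photo data to libgphoto2. */
	return 0;
}

static const struct file_operations camx_fops = {
	.owner	= THIS_MODULE,
	.read	= camx_read,
	.llseek	= no_llseek,
};

static struct miscdevice camx_dev = {
	.minor	= MISC_DYNAMIC_MINOR,
	.name	= "cam0",		/* would appear as /dev/cam0 */
	.fops	= &camx_fops,
};

static int __init camx_init(void)
{
	return misc_register(&camx_dev);
}

static void __exit camx_exit(void)
{
	misc_deregister(&camx_dev);
}

module_init(camx_init);
module_exit(camx_exit);
MODULE_LICENSE("GPL");

In a real driver this registration would of course happen from the camera's probe routine rather than from a standalone module.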
The whole idea is to allocate additional buffers for snapshots, imagining that the camera may be streaming in low quality/low resolution and, once a snapshot is requested, it will take one high quality/high resolution picture.
The ability to "take" a photo is present on some still cameras and not on others. "Some still cameras" includes some dual-mode cameras. For dual-mode cameras which can be requested to "take" a photo while running in webcam mode, the ability to do so is, generally speaking, present in the kernel driver.
To present the problem more simply, a webcam is, essentially, a device of USB class Video (even if the device uses proprietary protocols, this is at least conceptually true). This is true because a webcam streams video data. However, a still camera is, in its essence as a computer peripheral, a USB mass storage device (even if the device has a proprietary protocol and even if it will not do everything one would expect from a normal mass storage device). That is, a still camera can be considered as a device which contains data, and one needs to get the data from there to the computer, and then to process said data. It is when the two different kinds of device are married together in one piece of physical hardware, with the same USB Vendor:Product code, that trouble follows.
We'll need to split the problem into all possible alternatives, as the solution may be different for each.
That, I think, is true.
If I understood you well, there are 4 possible ways:
- UVC + USB mass storage;
- UVC + Vendor Class mass storage;
The two above are probably precluded by the USB specs. Which might mean that somebody is going to do that anyway, of course. So far, in the rare cases that such a thing has come up, the device itself is a "good citizen" in that it has two Vendor:Product codes, not just one, and something has to be done (pushing physical buttons, or so) to make it be seen as the "other kind of device" when it is plugged to the computer.
- Vendor Class video + USB mass storage;
Probably the same as the two items above.
- Vendor Class video + Vendor Class mass storage.
This one is where practically all of the trouble occurs. Vendor Class means exactly that the manufacturer can do whatever seems clever, or cheap, and they do.
For (1) and (3), it doesn't make sense to re-implement USB mass storage on V4L. We may just need some sort of resource locking, if the device can't provide both ways at the same time.
For (2) and (4), we'll need an extra API like what Hans is proposing, plus a resource locking schema.
As I said, it is difficult for me to imagine how all four cases can or will come up in practice. But it probably is good to include them, at least conceptually.
That said, "resource locking" is currently one big problem we need to solve in the media subsystem.
We have already some problems like that on devices that implement both V4L and DVB API's. For example, you can't use the same tuner to watch analog and digital TV at the same time. Also, several devices have I2C switches. You can't, for example, poll for a RC code while the I2C switch is opened for tuner access.
This is the same kind of problem, for example, that happens with 3G modems that can work either as USB storage or as modem.
Yes. It does. And the matter has given similar headaches to the mass-storage people, which, I understand, are at least partially addressed. But this underscores one of my original points: this is a general problem, not exclusively confined to cameras or to media support. The fundamental problem is to deal with hardware which sits in two categories and does two different things.
This sounds to be a good theme for the Workshop, or even to KS/2011.
Thanks. Do you recall when and where KS/2011 is going to take place?
Theodore Kilgore
Em 04-08-2011 15:37, Theodore Kilgore escreveu:
Yes, that kind of thing is an obvious problem. Actually, though, it may be that this had just better not happen. For some of the hardware that I know of, it could be a real problem no matter what approach would be taken. For example, certain specific dual-mode cameras will delete all data stored on the camera if the camera is fired up in webcam mode. To drop Gphoto suddenly in order to do the videoconf call would, on such cameras, result in the automatic deletion of all photos on the camera even if those photos had not yet been downloaded. Presumably, one would not want to do that.
Some of the sq905 cameras in particular will do this. It depends upon the firmware version. Indeed, for those which do, the same USB command which starts streaming is exploited in the Gphoto driver for deletion of all photos stored on the camera. For the other firmware versions, there is in fact no way to delete all the photos, except to push buttons on the camera case. This is by the way a typical example of the very rudimentary, minimalist interface of some of these cheap cameras.
So, in other words, on such cameras the kernel driver should return -EBUSY if there are photos stored on them and someone tries to stream.
Probably, this should work the other way around, too. If not, then there is the question of closing the streaming in some kind of orderly fashion.
Yes.
IMO, the right solution is to work on a proper snapshot mode in kernelspace, and to move the drivers that already have a kernelspace driver out of Gphoto.
Well, the problem with that is, a still camera and a webcam are entirely different beasts. Still photos stored in the memory of an external device, waiting to be downloaded, are not snapshots. Thus, access to those still photos is not access to snapshots. Things are not that simple.
Yes, stored photos require a different API, as Hans pointed.
Yes again. His observations seem to me to be saying exactly the same thing that I did.
I think that some cameras just export them as a USB storage. For those, we may eventually need some sort of locking between the USB storage and V4L.
I can imagine that this could be the case. Also, to be entirely logical, one might imagine that a PTP camera could be fired up in streaming mode, too. I myself do not know of any cameras which are both USB storage and streaming cameras. In fact, as I understand the USB classes, such a thing would be in principle forbidden.
It is possible to use a single USB ID and have two (or more) interfaces there, each belonging to a different USB class. Anyway, while abstracting the proper solution, it is safer to consider it as a possible scenario.
However, the practical consequence could be that sooner or later someone is going to do just that and that deviant hardware is going to sell like hotcakes and we are going to get pestered.
Yes.
That said, there is a proposed topic for snapshot buffer management. Maybe it can cover the remaining needs for taking high-quality pictures in the kernel.
Again, when downloading photo images which are _stored_ on the camera one is not "taking high quality pictures." Different functionality is involved. This may involve, for example, a different Altsetting for the USB device and may also require the use of Bulk transport instead of Isochronous transport.
Ok. The gspca driver supports it already. All we need to do is to implement a proper API for retrieving still photos.
Yes, I believe that Hans has some idea to do something like this:
- kernel module creates a stillcam device as well as a /dev/video, for
those cameras for which it is appropriate
- libgphoto2 driver is modified so as to access /dev/camera through the
kernel, instead of talking to the camera through libusb.
Hans has written some USB Mass Storage digital picture frame drivers for Gphoto, which do something similar.
The above strategy seems OK to me.
The whole idea is to allocate additional buffers for snapshots, imagining that the camera may be streaming in low quality/low resolution and, once a snapshot is requested, it will take one high quality/high resolution picture.
The ability to "take" a photo is present on some still cameras and not on others. "Some still cameras" includes some dual-mode cameras. For dual-mode cameras which can be requested to "take" a photo while running in webcam mode, the ability to do so is, generally speaking, present in the kernel driver.
To present the problem more simply, a webcam is, essentially, a device of USB class Video (even if the device uses proprietary protocols, this is at least conceptually true). This is true because a webcam streams video data. However, a still camera is, in its essence as a computer peripheral, a USB mass storage device (even if the device has a proprietary protocol and even if it will not do everything one would expect from a normal mass storage device). That is, a still camera can be considered as a device which contains data, and one needs to get the data from there to the computer, and then to process said data. It is when the two different kinds of device are married together in one piece of physical hardware, with the same USB Vendor:Product code, that trouble follows.
We'll need to split the problem into all possible alternatives, as the solution may be different for each.
That, I think, is true.
If I understood you well, there are 4 possible ways:
- UVC + USB mass storage;
- UVC + Vendor Class mass storage;
The two above are probably precluded by the USB specs. Which might mean that somebody is going to do that anyway, of course. So far, in the rare cases that such a thing has come up, the device itself is a "good citizen" in that it has two Vendor:Product codes, not just one, and something has to be done (pushing physical buttons, or so) to make it be seen as the "other kind of device" when it is plugged to the computer.
Some of the em28xx devices export an Audio Vendor Class and a Vendor Class for video, both using the same USB ID (but on different interfaces). The kernel handles such devices fine: for each interface, it probes the driver again. So, snd-usb-audio handles the audio device, and em28xx handles the video part. In this specific example, the devices are well behaved, as the USB audio driver doesn't need to share any kind of resource locking with the video driver. The cx231xx chips use a similar approach, except that one interface is for the Remote Controller (an HID-like MCE vendor class interface), and the other one is for video and audio.
Yet, I don't doubt that we may find badly behaved citizens there.
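As an aside, the per-interface probing described above is simply how the USB core works: a driver's id_table can match on a specific interface class, so two drivers can each claim their own interface of the same Vendor:Product. A minimal sketch, with made-up IDs and driver name (not the actual em28xx code):

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/usb.h>

/* Match only the vendor-specific video interface of a hypothetical
 * 0x1234:0x5678 device; another driver (e.g. snd-usb-audio) is free
 * to claim the audio interface of the same device. */
static const struct usb_device_id demo_ids[] = {
	{ USB_DEVICE_AND_INTERFACE_INFO(0x1234, 0x5678,
					USB_CLASS_VENDOR_SPEC, 0, 0) },
	{ }
};
MODULE_DEVICE_TABLE(usb, demo_ids);

static int demo_probe(struct usb_interface *intf,
		      const struct usb_device_id *id)
{
	/* Called once per matching interface, not once per device. */
	dev_info(&intf->dev, "claiming interface %d\n",
		 intf->cur_altsetting->desc.bInterfaceNumber);
	return 0;	/* a real driver would register its v4l2 device here */
}

static void demo_disconnect(struct usb_interface *intf)
{
}

static struct usb_driver demo_driver = {
	.name		= "demo-video",
	.probe		= demo_probe,
	.disconnect	= demo_disconnect,
	.id_table	= demo_ids,
};

static int __init demo_init(void)
{
	return usb_register(&demo_driver);
}

static void __exit demo_exit(void)
{
	usb_deregister(&demo_driver);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");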
- Vendor Class video + USB mass storage;
Probably the same as the two items above.
- Vendor Class video + Vendor Class mass storage.
This one is where practically all of the trouble occurs. Vendor Class means exactly that the manufacturer can do whatever seems clever, or cheap, and they do.
So, we need to solve this problem first.
For (1) and (3), it doesn't make sense to re-implement USB mass storage on V4L. We may just need some sort of resource locking, if the device can't provide both ways at the same time.
For (2) and (4), we'll need an extra API like what Hans is proposing, plus a resource locking schema.
As I said, it is difficult for me to imagine how all four cases can or will come up in practice. But it probably is good to include them, at least conceptually.
Yes.
That said, "resource locking" is currently one big problem we need to solve in the media subsystem.
We have already some problems like that on devices that implement both V4L and DVB API's. For example, you can't use the same tuner to watch analog and digital TV at the same time. Also, several devices have I2C switches. You can't, for example, poll for a RC code while the I2C switch is opened for tuner access.
This is the same kind of problem, for example, that happens with 3G modems that can work either as USB storage or as modem.
Yes. It does. And the matter has given similar headaches to the mass-storage people, which, I understand, are at least partially addressed.
Yes, but I'm not sure if it was properly addressed. I have one device here that has 3 different functions: USB mass storage, 3G modem and ISDB-T digital TV. Currently, it has no Linux driver, so I'm not sure what the common resources are, but this probably means that some manufacturers are integrating more functions into a single device. I wouldn't be surprised if the current approach fails with more such devices.
But this underscores one of my original points: this is a general problem, not exclusively confined to cameras or to media support. The fundamental problem is to deal with hardware which sits in two categories and does two different things.
Yes.
This sounds to be a good theme for the Workshop, or even to KS/2011.
Thanks. Do you recall when and where KS/2011 is going to take place?
The media workshop happens together with the KS/2011. Sunday is an exclusive day for the workshops, Monday is an exclusive day for KS/2011, and Tuesday is a joint day for both KS and the KS workshops.
Regards, Mauro
On Thu, 4 Aug 2011, Mauro Carvalho Chehab wrote:
Em 04-08-2011 15:37, Theodore Kilgore escreveu:
Yes, that kind of thing is an obvious problem. Actually, though, it may be that this had just better not happen. For some of the hardware that I know of, it could be a real problem no matter what approach would be taken. For example, certain specific dual-mode cameras will delete all data stored on the camera if the camera is fired up in webcam mode. To drop Gphoto suddenly in order to do the videoconf call would, on such cameras, result in the automatic deletion of all photos on the camera even if those photos had not yet been downloaded. Presumably, one would not want to do that.
Some of the sq905 cameras in particular will do this. It depends upon the firmware version. Indeed, for those which do, the same USB command which starts streaming is exploited in the Gphoto driver for deletion of all photos stored on the camera. For the other firmware versions, there is in fact no way to delete all the photos, except to push buttons on the camera case. This is by the way a typical example of the very rudimentary, minimalist interface of some of these cheap cameras.
So, in other words, on such cameras the kernel driver should return -EBUSY if there are photos stored on them and someone tries to stream.
Probably, this should work the other way around, too. If not, then there is the question of closing the streaming in some kind of orderly fashion.
Yes.
IMO, the right solution is to work on a proper snapshot mode in kernelspace, and to move the drivers that already have a kernelspace driver out of Gphoto.
Well, the problem with that is, a still camera and a webcam are entirely different beasts. Still photos stored in the memory of an external device, waiting to be downloaded, are not snapshots. Thus, access to those still photos is not access to snapshots. Things are not that simple.
Yes, stored photos require a different API, as Hans pointed.
Yes again. His observations seem to me to be saying exactly the same thing that I did.
I think that some cameras just export them as a USB storage. For those, we may eventually need some sort of locking between the USB storage and V4L.
I can imagine that this could be the case. Also, to be entirely logical, one might imagine that a PTP camera could be fired up in streaming mode, too. I myself do not know of any cameras which are both USB storage and streaming cameras. In fact, as I understand the USB classes, such a thing would be in principle forbidden.
It is possible to use a single USB ID and have two (or more) interfaces there, each belonging to a different USB class.
True. However, unfortunate exceptions are found in the set of sq905 cameras and sq905c cameras, which have only Interface 0 (and, of course, use only Bulk Transport for all data regardless of its nature).
Anyway, while abstracting
the proper solution, it is safer to consider it as a possible scenario.
However, the practical consequence could be that sooner or later someone is going to do just that and that deviant hardware is going to sell like hotcakes and we are going to get pestered.
Yes.
That said, there is a proposed topic for snapshot buffer management. Maybe it can cover the remaining needs for taking high-quality pictures in the kernel.
Again, when downloading photo images which are _stored_ on the camera one is not "taking high quality pictures." Different functionality is involved. This may involve, for example, a different Altsetting for the USB device and may also require the use of Bulk transport instead of Isochronous transport.
Ok. The gspca driver supports it already. All we need to do is to implement a proper API for retrieving still photos.
Yes, I believe that Hans has some idea to do something like this:
- kernel module creates a stillcam device as well as a /dev/video, for
those cameras for which it is appropriate
- libgphoto2 driver is modified so as to access /dev/camera through the
kernel, instead of talking to the camera through libusb.
Hans has written some USB Mass Storage digital picture frame drivers for Gphoto, which do something similar.
The above strategy seems OK to me.
The whole idea is to allocate additional buffers for snapshots, imagining that the camera may be streaming in low quality/low resolution and, once a snapshot is requested, it will take one high quality/high resolution picture.
The ability to "take" a photo is present on some still cameras and not on others. "Some still cameras" includes some dual-mode cameras. For dual-mode cameras which can be requested to "take" a photo while running in webcam mode, the ability to do so is, generally speaking, present in the kernel driver.
To present the problem more simply, a webcam is, essentially, a device of USB class Video (even if the device uses proprietary protocols, this is at least conceptually true). This is true because a webcam streams video data. However, a still camera is, in its essence as a computer peripheral, a USB mass storage device (even if the device has a proprietary protocol and even if it will not do everything one would expect from a normal mass storage device). That is, a still camera can be considered as a device which contains data, and one needs to get the data from there to the computer, and then to process said data. It is when the two different kinds of device are married together in one piece of physical hardware, with the same USB Vendor:Product code, that trouble follows.
We'll need to split the problem into all possible alternatives, as the solution may be different for each.
That, I think, is true.
If I understood you well, there are 4 possible ways:
- UVC + USB mass storage;
- UVC + Vendor Class mass storage;
The two above are probably precluded by the USB specs. Which might mean that somebody is going to do that anyway, of course. So far, in the rare cases that such a thing has come up, the device itself is a "good citizen" in that it has two Vendor:Product codes, not just one, and something has to be done (pushing physical buttons, or so) to make it be seen as the "other kind of device" when it is plugged to the computer.
Some of the em28xx devices export an Audio Vendor Class and a Vendor Class for video, both using the same USB ID (but on different interfaces). The kernel handles such devices fine: for each interface, it probes the driver again. So, snd-usb-audio handles the audio device, and em28xx handles the video part. In this specific example, the devices are well behaved, as the USB audio driver doesn't need to share any kind of resource locking with the video driver. The cx231xx chips use a similar approach, except that one interface is for the Remote Controller (an HID-like MCE vendor class interface), and the other one is for video and audio.
Yet, I don't doubt that we may find badly behaved citizens there.
That is almost certain to occur. I am not sure if the cause is natural law, or original sin. But it is bound to happen.
- Vendor Class video + USB mass storage;
Probably the same as the two items above.
- Vendor Class video + Vendor Class mass storage.
This one is where practically all of the trouble occurs. Vendor Class means exactly that the manufacturer can do whatever seems clever, or cheap, and they do.
So, we need to solve this problem first.
For (1) and (3), it doesn't make sense to re-implement USB mass storage on V4L. We may just need some sort of resource locking, if the device can't provide both ways at the same time.
For (2) and (4), we'll need an extra API like what Hans is proposing, plus a resource locking schema.
As I said, it is difficult for me to imagine how all four cases can or will come up in practice. But it probably is good to include them, at least conceptually.
Yes.
That said, "resource locking" is currently one big problem we need to solve in the media subsystem.
We have already some problems like that on devices that implement both V4L and DVB API's. For example, you can't use the same tuner to watch analog and digital TV at the same time. Also, several devices have I2C switches. You can't, for example, poll for a RC code while the I2C switch is opened for tuner access.
This is the same kind of problem, for example, that happens with 3G modems that can work either as USB storage or as modem.
Yes. It does. And the matter has given similar headaches to the mass-storage people, which, I understand, are at least partially addressed.
Yes, but I'm not sure if it was properly addressed.
I did not follow closely what they did about the issue; I am only aware that they confronted it.
I have one device here
that has 3 different functions: USB mass storage, 3G modem and ISDB-T digital TV. Currently, it has no Linux driver, so I'm not sure what the common resources are, but this probably means that some manufacturers are integrating more functions into a single device. I wouldn't be surprised if the current approach fails with more such devices.
But this underscores one of my original points: this is a general problem, not exclusively confined to cameras or to media support. The fundamental problem is to deal with hardware which sits in two categories and does two different things.
Yes.
Except, I should have said "two or more," it seems.
This sounds to be a good theme for the Workshop, or even to KS/2011.
Thanks. Do you recall when and where KS/2011 is going to take place?
The media workshop happens together with the KS/2011. Sunday is an exclusive day for the workshops, Monday is an exclusive day for KS/2011, and Tuesday is a joint day for both KS and the KS workshops.
So, as I understand, these are all about to take place in Vancouver, sometime in the next two weeks? It really is the wrong time, but I really wish now that I were going. I would at the very minimum try to get the people together that I know of, who have wrestled with the issue.
Theodore Kilgore
Em 04-08-2011 18:16, Theodore Kilgore escreveu:
This sounds to be a good theme for the Workshop, or even to KS/2011.
Thanks. Do you recall when and where KS/2011 is going to take place?
The media workshop happens together with the KS/2011. Sunday is an exclusive day for the workshops, Monday is an exclusive day for KS/2011, and Tuesday is a joint day for both KS and the KS workshops.
So, as I understand, these are all about to take place in Vancouver, sometime in the next two weeks? It really is the wrong time, but I really wish now that I were going. I would at the very minimum try to get the people together that I know of, who have wrestled with the issue.
Hmm... it seems that you didn't read the sites I pointed to in my original email, or that I was not clear enough.
The Media Subsystem Workshop and the Kernel Summit won't happen in Vancouver. What will happen there is LinuxCon North America, plus the USB mini-summit. I should be there, btw. I think I should add an additional topic there to discuss multi-featured devices.
The KS/2011 and the Media Workshop will happen in Prague, on Oct 23-25, just before the LinuxCon Europe.
Regards, Mauro
On Thu, 4 Aug 2011, Mauro Carvalho Chehab wrote:
Em 04-08-2011 18:16, Theodore Kilgore escreveu:
This sounds to be a good theme for the Workshop, or even to KS/2011.
Thanks. Do you recall when and where KS/2011 is going to take place?
The media workshop happens together with the KS/2011. Sunday is an exclusive day for the workshops, Monday is an exclusive day for KS/2011, and Tuesday is a joint day for both KS and the KS workshops.
So, as I understand, these are all about to take place in Vancouver, sometime in the next two weeks? It really is the wrong time, but I really wish now that I were going. I would at the very minimum try to get the people together that I know of, who have wrestled with the issue.
Hmm... it seems that you didn't read the sites I pointed to in my original email,
Not really, no. I had resigned myself to being unable to attend anything like this, so why torture myself with looking in the shop window at what I cannot buy?
or that I was not clear enough.
Without looking again, I expect that you were quite clear.
The Media Subsystem Workshop and the Kernel Summit won't happen in Vancouver. What will happen there is LinuxCon North America, plus the USB mini-summit. I should be there, btw. I think I should add an additional topic there to discuss multi-featured devices.
A very good idea.
The KS/2011 and the Media Workshop will happen in Prague, on Oct 23-25, just before the LinuxCon Europe.
Hmmm. That is still not good because classes are in session. But it is not nearly so bad in the middle of a semester as it is at the beginning. It is even conceivable that I might be able to shake loose some money -- if I were either giving a presentation or would (for example) lead a panel discussion on this topic. I believe that I would find it easier to be a moderator or discussion "leader" than actually to present about a thing like this. Namely, I can see the issues but not always the solutions.
Probably, it is not good to apply to my university for money if I merely were going to attend; mere intent to attend would probably not get me funding for a mathematics conference, either. I also would need enough lead time to be able to get things through the bureaucratic system. There is some kind of very unreasonable deadline now in effect in the university about how soon one needs to apply for foreign travel.
So if you think my presence would have some value, I need something to get the application started, over here. Invitation, or something similar. If it is too much trouble or would interfere with already-existing plans, then never mind. I would hardly be upset if I don't go to something which I was not expecting to go to in the first place.
Theodore Kilgore
Hi all,
On 08/04/2011 02:34 PM, Mauro Carvalho Chehab wrote:
Em 03-08-2011 20:20, Theodore Kilgore escreveu:
<snip snip>
Yes, that kind of thing is an obvious problem. Actually, though, it may be that this had just better not happen. For some of the hardware that I know of, it could be a real problem no matter what approach would be taken. For example, certain specific dual-mode cameras will delete all data stored on the camera if the camera is fired up in webcam mode. To drop Gphoto suddenly in order to do the videoconf call would, on such cameras, result in the automatic deletion of all photos on the camera even if those photos had not yet been downloaded. Presumably, one would not want to do that.
So, in other words, on such cameras the kernel driver should return -EBUSY if there are photos stored on them and someone tries to stream.
Agreed.
IMO, the right solution is to work on a proper snapshot mode in kernelspace, and to move the drivers that already have a kernelspace driver out of Gphoto.
Well, the problem with that is, a still camera and a webcam are entirely different beasts. Still photos stored in the memory of an external device, waiting to be downloaded, are not snapshots. Thus, access to those still photos is not access to snapshots. Things are not that simple.
Yes, stored photos require a different API, as Hans pointed. I think that some cameras just export them as a USB storage.
Erm, that is not what I tried to say, or do you mean another Hans?
<snip snip>
If I understood you well, there are 4 possible ways:
- UVC + USB mass storage;
- UVC + Vendor Class mass storage;
- Vendor Class video + USB mass storage;
- Vendor Class video + Vendor Class mass storage.
Actually the cameras Theodore and I are talking about here all fall into category 4. I expect devices which do any of 1-3 to properly use different interfaces for this; in fact, the different class specifications mandate that they use different interfaces.
This sounds to be a good theme for the Workshop, or even to KS/2011.
Agreed, although we don't need to talk about this for very long, the solution is basically:
1) Define a still image retrieval API for v4l2 devices (there is only 1 interface for both functions on these devices, so only 1 driver, and to me it makes sense to extend the existing drivers to also do still image retrieval).
2) Modify existing kernel v4l2 drivers to provide this API
3) Write a new libgphoto driver which talks this interface (only need to do one driver since all dual mode cams will export the same API).
1) is something to discuss at the workshop.
Regards,
Hans
On Fri, 5 Aug 2011, Hans de Goede wrote:
Hi all,
On 08/04/2011 02:34 PM, Mauro Carvalho Chehab wrote:
Em 03-08-2011 20:20, Theodore Kilgore escreveu:
<snip snip>
Yes, that kind of thing is an obvious problem. Actually, though, it may be that this had just better not happen. For some of the hardware that I know of, it could be a real problem no matter what approach would be taken. For example, certain specific dual-mode cameras will delete all data stored on the camera if the camera is fired up in webcam mode. To drop Gphoto suddenly in order to do the videoconf call would, on such cameras, result in the automatic deletion of all photos on the camera even if those photos had not yet been downloaded. Presumably, one would not want to do that.
So, in other words, on such cameras the kernel driver should return -EBUSY if there are photos stored on them and someone tries to stream.
Agreed.
Here, too. Not only that, but also -EBUSY needs to be returned if streaming is being done and someone tries to download photos (cf. yesterday's exchange between me and Adam Baker, where it was definitely established that this currently leads to bad stuff happening).
IMO, the right solution is to work on a proper snapshot mode in kernelspace, and to move the drivers that already have a kernelspace driver out of Gphoto.
Well, the problem with that is, a still camera and a webcam are entirely different beasts. Still photos stored in the memory of an external device, waiting to be downloaded, are not snapshots. Thus, access to those still photos is not access to snapshots. Things are not that simple.
Yes, stored photos require a different API, as Hans pointed. I think that some cameras just export them as a USB storage.
Erm, that is not what I tried to say, or do you mean another Hans?
For the record, this one didn't come from me, either. :-)
<snip snip>
If I understood you well, there are 4 possible ways:
- UVC + USB mass storage;
- UVC + Vendor Class mass storage;
- Vendor Class video + USB mass storage;
- Vendor Class video + Vendor Class mass storage.
Actually the cameras Theodore and I are talking about here all fall into category 4.
Currently true, yes.
I expect devices which do any of 1-3 to properly use different interfaces for this; in fact, the different class specifications mandate that they use different interfaces.
As is well known, *everybody* obeys the class specifications, too. Always did, and always will. And Linus says that he got the original kernel from the Tooth Fairy, and because he said that we all believe him. The point being, trouble will very likely come along. I think Mauro is right at least to consider the possibility.
This sounds to be a good theme for the Workshop, or even to KS/2011.
Agreed, although we don't need to talk about this for very long, the solution is basically:
- Define a still image retrieval API for v4l2 devices (there is only 1
interface for both functions on these devices,
True
so only 1 driver, and to me it makes sense to extend the existing drivers to also do still image retrieval).
2) Modify existing kernel v4l2 drivers to provide this API
3) Write a new libgphoto driver which talks this interface (only need to do one driver since all dual mode cams will export the same API).
Yes, we pretty much agree that this is probably a good way to proceed. However, my curiosity is aroused by something that Adam mentioned yesterday. Namely
"If you can solve the locking problem between devices in the kernel then it shouldn't matter if one of the kernel devices is the generic device that is used to support libusb."
I am not completely sure of what he meant here. I am not intimately conversant with the internals of libusb. However, is there something here which could be used constructively? Could things be set up so that, for example, the kernel module hands the "generic device" over to libusb? If it were possible to do things that way, it might be the most minimally disruptive approach of all, since it might not require much if any changes in libgphoto2 access to cameras.
1) is something to discuss at the workshop.
Regards,
Hans
Theodore Kilgore
On Friday 05 August 2011, Hans de Goede wrote:
This sounds to be a good theme for the Workshop, or even to KS/2011.
Agreed, although we don't need to talk about this for very long, the solution is basically:
Define a still image retrieval API for v4l2 devices (there is only 1 interface for both functions on these devices, so only 1 driver, and to me it makes sense to extend the existing drivers to also do still image retrieval).
Modify existing kernel v4l2 drivers to provide this API
Write a new libgphoto driver which talks this interface (only need to do one driver since all dual mode cams will export the same API).
is something to discuss at the workshop.
This approach sounds fine as long as you can come up with a definition for the API that covers the existing needs, is extensible when new cameras come along, and doesn't create horrible inefficiencies by not matching the way some cameras work. I've only got one example of such a camera, and it is a fairly basic one, but things I can imagine the API needing to provide are:
1) Report number of images on device
2) Select an image to read (for some cameras selecting next may be much more efficient than selecting at random, although whether that inefficiency occurs when selecting, when reading image info or when reading image data may vary)
3) Read image information for selected image (resolution, compression type, FOURCC)
4) Read raw image data for selected image
5) Delete individual image (not supported by all cameras)
6) Delete all images (sometimes supported on cameras that don't support individual delete)
I'm not sure if any of these cameras support tethered capture, but if they do then add:
- Take photo
- Set resolution
I doubt if any of them support EXIF data, thumbnail images, the ability to upload images to the camera or any sound recording but if they do then those are additional things that gphoto2 would want to be able to do.
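To give the shape of such an API a little more concreteness, here is one purely hypothetical way the operations listed above could be expressed as ioctls on a still-image node. Nothing like this exists in V4L2 today; every name and ioctl number below is a placeholder for illustration only:

/* Hypothetical user-visible header for a "stillcam" node. */
#include <linux/ioctl.h>
#include <linux/types.h>

struct stillcam_image_info {
	__u32 index;		/* which stored image (0..count-1) */
	__u32 width;		/* resolution of that image */
	__u32 height;
	__u32 pixelformat;	/* FOURCC describing the compression */
	__u32 sizeimage;	/* bytes needed for the raw image data */
};

#define STILLCAM_G_COUNT	_IOR('S', 0, __u32)	/* 1) number of images */
#define STILLCAM_S_SELECT	_IOW('S', 1, __u32)	/* 2) select an image */
#define STILLCAM_G_IMGINFO	_IOR('S', 2, struct stillcam_image_info) /* 3) */
/* 4) raw data of the selected image would then be fetched with read() */
#define STILLCAM_DEL_IMG	_IOW('S', 3, __u32)	/* 5) delete one image */
#define STILLCAM_DEL_ALL	_IO('S', 4)		/* 6) delete all images */

Whether the data is fetched with read(), mmap()ed buffers, or something closer to the existing V4L2 streaming model is exactly the kind of thing that would need discussing.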
Regards
Adam
(first of two replies to Adam's message; second reply deals with other topics)
On Sun, 7 Aug 2011, Adam Baker wrote:
On Friday 05 August 2011, Hans de Goede wrote:
This sounds to be a good theme for the Workshop, or even to KS/2011.
Agreed, although we don't need to talk about this for very long, the solution is basically:
Define a still image retrieval API for v4l2 devices (there is only 1 interface for both functions on these devices, so only 1 driver, and to me it makes sense to extend the existing drivers to also do still image retrieval).
Modify existing kernel v4l2 drivers to provide this API
Write a new libgphoto driver which talks this interface (only need to do one driver since all dual mode cams will export the same API).
is something to discuss at the workshop.
This approach sounds fine as long as you can come up with a definition for the API that covers the existing needs and is extensible when new cameras come along and doesn't create horrible inefficiencies by not matching the way some cameras work. I've only got one example of such a camera and it is a fairly basic one but things I can imagine the API needing to provide are
- Report number of images on device
- Select an image to read (for some cameras selecting next may be much more
efficient than selecting at random although whether that inefficiency occurs when selecting, when reading image info or when reading image data may vary) 3) Read image information for selected image (resolution, compression type, FOURCC) 4) Read raw image data for selected image 5) Delete individual image (not supported by all cameras) 6) Delete all images (sometimes supported on cameras that don't support individual delete)
I'm not sure if any of these cameras support tethered capture but if they do then add Take photo Set resolution
I doubt if any of them support EXIF data, thumbnail images, the ability to upload images to the camera or any sound recording but if they do then those are additional things that gphoto2 would want to be able to do.
Adam,
Yipe. This looks to me like one inglorious mess. I do not know if it is feasible or not, but I would wish for something much more simple. Namely, if the camera is not a dual-mode camera then none of this is necessary, of course. But if it is a dual-mode camera then the kernel driver is able to "hand off" the camera to a (libgphoto2-based) userspace driver which can handle all of the gory details of what the camera can do in its role as a still camera. This would imply that there is a device which libgphoto2 can access, presumably another device which is distinct from /dev/videoX; let's call it /dev/camX for now, just to give it a name during the discussion.
So then what happens ought to be something like the following:
1. Camera is plugged in, detected, and kernel module is fired up. Then either
2a. A streaming app is started. Then, upon request from outside the kernel, the /dev/videoX is locked in and /dev/camX is locked out. The camera streams until told to quit streaming, and in the meantime any access to /dev/camX is not permitted. When the streaming is turned off, the lock is released.
or
2b. A stillcam app is started. Then similar to 2a, but the locking is reversed.
I think that this kind of thing would keep life simple. As I understand what Hans is envisioning, it is pretty much along the same lines, too. It would mean, of course, that the way that libgphoto2 would access one of these cameras would be directly to access the /dev/camX provided by the kernel, and not to use libusb. But that can be done, I think. As I mentioned before, Hans has written several libgphoto2 drivers for digital picture frames which are otherwise seen as USB mass storage devices. Something similar would have to be done with dual-mode cameras.
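For what it's worth, the mutual exclusion described in 2a/2b could be as simple as an ownership flag shared by the two nodes' open() paths. A minimal sketch, with hypothetical names, of the kind of claim/release helpers a driver could use (not existing code):

#include <linux/mutex.h>
#include <linux/errno.h>

enum cam_owner { OWNER_NONE, OWNER_VIDEO, OWNER_STILL };

struct dualmode_cam {
	struct mutex lock;
	enum cam_owner owner;	/* which node currently owns the camera */
	int users;		/* opens held by the current owner */
};

/* Called from open() of /dev/videoX with OWNER_VIDEO,
 * and from open() of /dev/camX with OWNER_STILL. */
static int cam_claim(struct dualmode_cam *cam, enum cam_owner who)
{
	int ret = 0;

	mutex_lock(&cam->lock);
	if (cam->owner == OWNER_NONE) {
		cam->owner = who;
		cam->users = 1;
	} else if (cam->owner == who) {
		cam->users++;
	} else {
		ret = -EBUSY;	/* the other mode owns the camera */
	}
	mutex_unlock(&cam->lock);
	return ret;
}

/* Called from release() of either node. */
static void cam_release(struct dualmode_cam *cam)
{
	mutex_lock(&cam->lock);
	if (--cam->users == 0)
		cam->owner = OWNER_NONE;
	mutex_unlock(&cam->lock);
}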
I will send a second reply to this message, which deals in particular with the list of abilities you outlined above. The point is, the situation as to that list of abilities is more chaotic than is generally realized. And when people are laying plans they really need to be aware of that.
Theodore Kilgore
Em 07-08-2011 23:26, Theodore Kilgore escreveu:
(first of two replies to Adam's message; second reply deals with other topics)
On Sun, 7 Aug 2011, Adam Baker wrote:
On Friday 05 August 2011, Hans de Goede wrote:
This sounds to be a good theme for the Workshop, or even to KS/2011.
Agreed, although we don't need to talk about this for very long, the solution is basically:
Define a still image retrieval API for v4l2 devices (there is only 1 interface for both functions on these devices, so only 1 driver, and to me it makes sense to extend the existing drivers to also do still image retrieval).
Modify existing kernel v4l2 drivers to provide this API
Write a new libgphoto driver which talks this interface (only need to do one driver since all dual mode cams will export the same API).
is something to discuss at the workshop.
This approach sounds fine as long as you can come up with a definition for the API that covers the existing needs and is extensible when new cameras come along and doesn't create horrible inefficiencies by not matching the way some cameras work. I've only got one example of such a camera and it is a fairly basic one but things I can imagine the API needing to provide are
- Report number of images on device
- Select an image to read (for some cameras selecting next may be much more
efficient than selecting at random although whether that inefficiency occurs when selecting, when reading image info or when reading image data may vary) 3) Read image information for selected image (resolution, compression type, FOURCC) 4) Read raw image data for selected image 5) Delete individual image (not supported by all cameras) 6) Delete all images (sometimes supported on cameras that don't support individual delete)
I'm not sure if any of these cameras support tethered capture but if they do then add Take photo Set resolution
I doubt if any of them support EXIF data, thumbnail images, the ability to upload images to the camera or any sound recording but if they do then those are additional things that gphoto2 would want to be able to do.
Adam,
Yipe. This looks to me like one inglorious mess. I do not know if it is feasible or not, but I would wish for something much more simple. Namely, if the camera is not a dual-mode camera then none of this is necessary, of course. But if it is a dual-mode camera then the kernel driver is able to "hand off" the camera to a (libgphoto2-based) userspace driver which can handle all of the gory details of what the camera can do in its role as a still camera. This would imply that there is a device which libgphoto2 can access, presumably another device which is distinct from /dev/videoX; let's call it /dev/camX for now, just to give it a name during the discussion.
So then what happens ought to be something like the following:
- Camera is plugged in, detected, and kernel module is fired up. Then
either
2a. A streaming app is started. Then, upon request from outside the kernel, the /dev/videoX is locked in and /dev/camX is locked out. The camera streams until told to quit streaming, and in the meantime any access to /dev/camX is not permitted. When the streaming is turned off, the lock is released.
or
2b. A stillcam app is started. Then similar to 2a, but the locking is reversed.
I think that this kind of thing would keep life simple. As I understand what Hans is envisioning, it is pretty much along the same lines, too. It would mean, of course, that the way that libgphoto2 would access one of these cameras would be directly to access the /dev/camX provided by the kernel, and not to use libusb. But that can be done, I think. As I mentioned before, Hans has written several libgphoto2 drivers for digital picture frames which are otherwise seen as USB mass storage devices. Something similar would have to be done with dual-mode cameras.
I will send a second reply to this message, which deals in particular with the list of abilities you outlined above. The point is, the situation as to that list of abilities is more chaotic than is generally realized. And when people are laying plans they really need to be aware of that.
From what I understood from your proposal, "/dev/camX" would be providing a
libusb-like interface, right?
If so, then, I'd say that we should just use the current libusb infrastructure. All we need is a way to lock libusb access when another driver is using the same USB interface.
Hans and Adam's proposal is to actually create a "/dev/camX" node that will give fs-like access to the pictures. As data access to these cameras generally uses PTP (or a PTP-like protocol), probably one driver will handle several different types of cameras, so we'll end up with a PTP driver that is separate from the V4L driver.
In other words, part of libgphoto2 code will be moved into the Kernel, to allow abstracting the webcam differences into a common interface.
In summary, there are currently two proposals:
1) a resource lock for USB interface between V4L and libusb;
2) a PTP-like USB driver, plus a resource lock between V4L and the PTP-like driver. The same resource lock may also be implemented at libusb, in order to avoid concurrency.
As you said that streaming on some cameras may delete all pictures from them, I suspect that (2) is the best alternative.
Thanks, Mauro
On Mon, 8 Aug 2011, Mauro Carvalho Chehab wrote:
Em 07-08-2011 23:26, Theodore Kilgore escreveu:
(first of two replies to Adam's message; second reply deals with other topics)
On Sun, 7 Aug 2011, Adam Baker wrote:
On Friday 05 August 2011, Hans de Goede wrote:
This sounds to be a good theme for the Workshop, or even to KS/2011.
Agreed, although we don't need to talk about this for very long, the solution is basically:
Define a still image retrieval API for v4l2 devices (there is only 1 interface for both functions on these devices, so only 1 driver, and to me it makes sense to extend the existing drivers to also do still image retrieval).
Modify existing kernel v4l2 drivers to provide this API
Write a new libgphoto driver which talks this interface (only need to do one driver since all dual mode cams will export the same API).
is something to discuss at the workshop.
This approach sounds fine as long as you can come up with a definition for the API that covers the existing needs and is extensible when new cameras come along and doesn't create horrible inefficiencies by not matching the way some cameras work. I've only got one example of such a camera and it is a fairly basic one but things I can imagine the API needing to provide are
1) Report number of images on device
2) Select an image to read (for some cameras selecting next may be much more efficient than selecting at random although whether that inefficiency occurs when selecting, when reading image info or when reading image data may vary)
3) Read image information for selected image (resolution, compression type, FOURCC)
4) Read raw image data for selected image
5) Delete individual image (not supported by all cameras)
6) Delete all images (sometimes supported on cameras that don't support individual delete)
I'm not sure if any of these cameras support tethered capture but if they do then add
- Take photo
- Set resolution
I doubt if any of them support EXIF data, thumbnail images, the ability to upload images to the camera or any sound recording but if they do then those are additional things that gphoto2 would want to be able to do.
<snip>
From what I understood from your proposal, "/dev/camX" would be providing a libusb-like interface, right?
If so, then, I'd say that we should just use the current libusb infrastructure. All we need is a way to lock libusb access when another driver is using the same USB interface.
Hans and Adam's proposal is to actually create a "/dev/camX" node that will give fs-like access to the pictures. As the data access to the cameras generally use PTP (or a PTP-like protocol), probably one driver will handle several different types of cameras, so, we'll end by having one different driver for PTP than the V4L driver.
In other words, part of libgphoto2 code will be moved into the Kernel, to allow abstracting the webcam differences into a common interface.
In summary, there are currently two proposals:
a resource lock for USB interface between V4L and libusb;
a PTP-like USB driver, plus a resource lock between V4L and the PTP-like driver.
The same resource lock may also be implemented at libusb, in order to avoid concurrency.
As you said that streaming on some cameras may delete all pictures from it, I suspect that (2) is the best alternative.
Thanks, Mauro
Mauro,
In fact none of the currently known and supported cameras are using PTP. All of them are proprietary. They have a rather intimidating set of differences in functionality, too. Namely, some of them have an isochronous endpoint, and some of them rely exclusively upon bulk transport. Some of them have a well developed set of internal capabilities as far as handling still photos are concerned. I mean, such things as the ability to download a single photo, selected at random from the set of photos on the camera, and some do not, requiring that the "ability" to do this is emulated in software -- by first downloading all previously listed photos and sending the data to /dev/null, then downloading the desired photo and saving it. Some of them permit deletion of individual photos, or all photos, and some do not. For some of them it is even true, as I have previously mentioned, that the USB command string which will delete all photos is the same command used for starting the camera in streaming mode.
But the point here is that these cameras are all different from one another, depending upon chipset and even, sometimes, upon firmware or chipset version. The still camera abilities and limitations of all of them are pretty much worked out in libgphoto2. My suggestion would be that the libgphoto2 support libraries for these cameras ought to be left the hell alone, except for some changes in, for example, how the camera is accessed in the first place (through libusb or through a kernel device) in order to address adequately the need to support both modes. I know what is in those libgphoto2 drivers because I wrote them. I can definitely promise that to move all of that functionality over into kernel modules would be a nightmare and would moreover greatly contribute to kernel bloat. You really don't want to go there.
As to whether to use libusb or not to use libusb:
It would be very nice to be able to keep using libusb to get access to these cameras, as then no change in the existing stillcam drivers would be required at all. Furthermore, if it were possible to solve all of the associated locking problems and to do it this way, it would be something that could be generalized to any analogous situation.
This would be very nice. I can also imagine, of course, that such an approach might require changes in libusb. For example, the current ability of libusb itself to switch off a kernel device might possibly be a step in the wrong direction, and it might possibly be needed to move that function, somehow, out of libusb and into the kernel support for affected hardware.
In the alternative, it ought to be possible for a libgphoto2 driver to hook up directly to a kernel-created device without going through libusb, and, as I have said in earlier messages, some of our driver code (for digital picture frames in particular) does just that. Then, whatever /dev entries and associated locking problems are needed could be handled by the kernel, and libgphoto2 talks to the device. But if things are done this way I strongly suggest that as little of the internals of the libgphoto2 driver are put in the kernel as it is possible to do. Be very economical about that, else there will be a big mess.
Theodore Kilgore
Em 08-08-2011 14:39, Theodore Kilgore escreveu:
On Mon, 8 Aug 2011, Mauro Carvalho Chehab wrote:
Em 07-08-2011 23:26, Theodore Kilgore escreveu:
<snip>
Mauro,
In fact none of the currently known and supported cameras are using PTP. All of them are proprietary. They have a rather intimidating set of differences in functionality, too. Namely, some of them have an isochronous endpoint, and some of them rely exclusively upon bulk transport. Some of them have a well developed set of internal capabilities as far as handling still photos are concerned. I mean, such things as the ability to download a single photo, selected at random from the set of photos on the camera, and some do not, requiring that the "ability" to do this is emulated in software -- by first downloading all previously listed photos and sending the data to /dev/null, then downloading the desired photo and saving it. Some of them permit deletion of individual photos, or all photos, and some do not. For some of them it is even true, as I have previously mentioned, that the USB command string which will delete all photos is the same command used for starting the camera in streaming mode.
But the point here is that these cameras are all different from one another, depending upon chipset and even, sometimes, upon firmware or chipset version. The still camera abilities and limitations of all of them are pretty much worked out in libgphoto2. My suggestion would be that the libgphoto2 support libraries for these cameras ought to be left the hell alone, except for some changes in, for example, how the camera is accessed in the first place (through libusb or through a kernel device) in order to address adequately the need to support both modes. I know what is in those libgphoto2 drivers because I wrote them. I can definitely promise that to move all of that functionality over into kernel modules would be a nightmare and would moreover greatly contribute to kernel bloat. You really don't want to go there.
As to whether to use libusb or not to use libusb:
It would be very nice to be able to keep using libusb to get access to these cameras, as then no change in the existing stillcam drivers would be required at all. Furthermore, if it were possible to solve all of the associated locking problems and to do it this way, it would be something that could be generalized to any analogous situation.
This would be very nice. I can also imagine, of course, that such an approach might require changes in libusb. For example, the current ability of libusb itself to switch off a kernel device might possibly be a step in the wrong direction, and it might possibly be needed to move that function, somehow, out of libusb and into the kernel support for affected hardware.
In the alternative, it ought to be possible for a libgphoto2 driver to hook up directly to a kernel-created device without going through libusb, and, as I have said in earlier messages, some of our driver code (for digital picture frames in particular) does just that. Then, whatever /dev entries and associated locking problems are needed could be handled by the kernel, and libgphoto2 talks to the device. But if things are done this way I strongly suggest that as little of the internals of the libgphoto2 driver are put in the kernel as it is possible to do. Be very economical about that, else there will be a big mess.
Doing a specific libusb-like approach just for those cams seems to be the wrong direction, as such a driver would be just a fork of already existing code. I'm all against duplicating it.
So, either we need to move the code from libgphoto2 into the kernel, or work on an approach that will make libusb return -EBUSY when a driver like V4L is in use, and vice versa.
I never took a look at how libusb works. It seems that the logic for it is in drivers/usb/core/devio.c. Assuming that this is the correct driver for libusb, the locking patch would be similar to the enclosed one.
Of course, more work will be needed. For example, in the specific case of devices where starting to stream will erase the stored pictures, the V4L driver will need some additional logic to detect whether the memory is filled and refuse to stream (or require CAP_SYS_ADMIN, returning -EPERM otherwise).
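Just to illustrate that last point, the check could be as simple as the sketch below; sd_pictures_stored() is a made-up per-device helper, and the real gspca start-streaming path looks different:

#include <linux/capability.h>
#include <linux/errno.h>
#include "gspca.h"

/* Hypothetical: called before starting the stream on cameras where
 * streaming erases the stored pictures. */
static int sd_check_streaming_allowed(struct gspca_dev *gspca_dev)
{
	if (sd_pictures_stored(gspca_dev) && !capable(CAP_SYS_ADMIN))
		return -EPERM;	/* don't silently wipe the user's photos */
	return 0;
}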
Thanks, Mauro
commit 7e4bd0a65c4b2f71157f42ce89ecd7df69480a4b
Author: Mauro Carvalho Chehab <mchehab@redhat.com>
Date:   Mon Aug 8 15:26:50 2011 -0300
Add a hardware resource locking schema to the kernel
Sometimes, a hardware resource is used by more than one device driver. This causes trouble, as using the resource from one driver is sometimes mutually exclusive with using the same resource from another driver.
Add a resource locking schema that will avoid such troubles.
TODO: This is just a quick hack prototyping a real solution. The namespace there is not consistent, nor is the actual code that locks the resource provided.
NOTE: As the problem also happens with some PCI devices, instead of adding such a locking schema to usb_device, it seems better to bind whatever solution we pick to struct device.
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
diff --git a/include/linux/resourcelock.h b/include/linux/resourcelock.h
new file mode 100644
index 0000000..fc7238c
--- /dev/null
+++ b/include/linux/resourcelock.h
@@ -0,0 +1,27 @@
+#include <linux/device.h>
+
+/**
+ * enum hw_resources - type of resource to lock
+ * LOCK_DEVICE - The complete device should be locked with exclusive access
+ *
+ * TODO: Add other types of resource locking, for example, to lock just a
+ * tuner, or an I2C bus
+ */
+
+enum hw_resources {
+	LOCK_DEVICE,
+};
+
+static int get_resource_lock(struct device dev, enum hw_resources hw_rec) {
+	/*
+	 * TODO: implement the actual code for the function, returning
+	 * -EBUSY if somebody else already allocated the needed resource
+	 */
+	return 0;
+}
+
+static void put_resource_lock(struct device dev, enum hw_resources hw_rec) {
+	/*
+	 * TODO: implement a function to release the resource
+	 */
+}
diff --git a/drivers/media/video/gspca/gspca.c b/drivers/media/video/gspca/gspca.c
index 5da4879..d8da757 100644
--- a/drivers/media/video/gspca/gspca.c
+++ b/drivers/media/video/gspca/gspca.c
@@ -35,6 +35,7 @@
 #include <asm/page.h>
 #include <linux/uaccess.h>
 #include <linux/ktime.h>
+#include <linux/resourcelock.h>
 #include <media/v4l2-ioctl.h>
 
 #include "gspca.h"
@@ -1218,6 +1219,7 @@ static void gspca_release(struct video_device *vfd)
 static int dev_open(struct file *file)
 {
 	struct gspca_dev *gspca_dev;
+	int ret;
 
 	PDEBUG(D_STREAM, "[%s] open", current->comm);
 	gspca_dev = (struct gspca_dev *) video_devdata(file);
@@ -1228,6 +1230,10 @@ static int dev_open(struct file *file)
 	if (!try_module_get(gspca_dev->module))
 		return -ENODEV;
 
+	ret = get_resource_lock(gspca_dev->dev->dev, LOCK_DEVICE);
+	if (ret)
+		return ret;
+
 	file->private_data = gspca_dev;
 #ifdef GSPCA_DEBUG
 	/* activate the v4l2 debug */
@@ -1260,6 +1266,9 @@ static int dev_close(struct file *file)
 		frame_free(gspca_dev);
 	}
 	file->private_data = NULL;
+
+	put_resource_lock(gspca_dev->dev->dev, LOCK_DEVICE);
+
 	module_put(gspca_dev->module);
 	mutex_unlock(&gspca_dev->queue_lock);
 
diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index 37518df..f94a6d5 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -49,6 +49,7 @@
 #include <asm/uaccess.h>
 #include <asm/byteorder.h>
 #include <linux/moduleparam.h>
+#include <linux/resourcelock.h>
 
 #include "usb.h"
 
@@ -693,6 +694,10 @@ static int usbdev_open(struct inode *inode, struct file *file)
 	if (dev->state == USB_STATE_NOTATTACHED)
 		goto out_unlock_device;
 
+	ret = get_resource_lock(dev->dev, LOCK_DEVICE);
+	if (ret)
+		goto out_unlock_device;
+
 	ret = usb_autoresume_device(dev);
 	if (ret)
 		goto out_unlock_device;
@@ -747,6 +752,7 @@ static int usbdev_release(struct inode *inode, struct file *file)
 	destroy_all_async(ps);
 	usb_autosuspend_device(dev);
 	usb_unlock_device(dev);
+	put_resource_lock(dev->dev, LOCK_DEVICE);
 	usb_put_dev(dev);
 	put_pid(ps->disc_pid);
On Mon, 8 Aug 2011, Mauro Carvalho Chehab wrote:
Em 08-08-2011 14:39, Theodore Kilgore escreveu:
On Mon, 8 Aug 2011, Mauro Carvalho Chehab wrote:
Em 07-08-2011 23:26, Theodore Kilgore escreveu:
(first of two replies to Adam's message; second reply deals with other topics)
<snip>
Doing an specific libusb-like approach just for those cams seems to be the wrong direction, as such driver would be just a fork of an already existing code. I'm all against duplicating it.
Well, in practice the "fork" would presumably be carried out by yours truly, presumably with the advice and help of concerned parties, too. Since I am involved on both the kernel side and the libgphoto2 side of the support for the same cameras, it would certainly shorten the lines of communication at the very least. Therefore this is not infeasible.
So, either we need to move the code from libgphoto2 to kernel
As I said, I think you don't want to do that.
or work into
an approach that will make libusb
(or an appropriate substitute)
to return -EBUSY when a driver like V4L
is in usage, and vice-versa.
I never took a look on how libusb works. It seems that the logic for it is at drivers/usb/core/devio.c. Assuming that this is correct driver for libusb, the locking patch would be similar to the enclosed one.
Of course, more work will be needed. For example, in the specific case of devices where starting stream will clean the memory data, the V4L driver will need some additional logic to detect if the memory is filled and not allowing stream (or requiring CAP_SYS_ADMIN, returning -EPERM otherwise).
Yes, this is probably a good idea in any event. As far as I know, this would affect just one kernel driver. A complication is that it is only some of the cameras supported by that driver, and they need to be detected.
Thanks, Mauro
Add a hardware resource locking schema at the Kernel
Sometimes, a hardware resource is used by more than one device driver. This causes troubles, as sometimes, using the resource by one driver is mutually exclusive than using the same resource by another driver.
Adds a resource locking schema that will avoid such troubles.
TODO: This is just a quick hack prototyping the a real solution. The namespace there is not consistent, nor the actual code that locks the resource is provided.
NOTE: As the problem also happens with some PCI devices, instead of adding such locking schema at usb_device, it seems better to bind whatever solution into struct device.
Interesting comment.
<snip>
Em 08-08-2011 16:32, Theodore Kilgore escreveu:
Doing an specific libusb-like approach just for those cams seems to be the wrong direction, as such driver would be just a fork of an already existing code. I'm all against duplicating it.
Well, in practice the "fork" would presumably be carried out by yours truly. Presumably with the advice and help of concerned parties. too. Since I am involved on both the kernel side and the libgphoto2 side of the support for the same cameras, it would certainly shorten the lines of communication at the very least. Therefore this is not infeasible.
Forking the code just because we have something "special" is the wrong thing to do (TM). I would not like to fork V4L core code due to some special need, but instead to add some glue there to cover the extra case. Maintaining a fork is bad in long term, as the same fixes/changes will likely be needed on both copies.
Adding some sort of resource locking like the example I've pointed seems easy and will work just fine.
So, either we need to move the code from libgphoto2 to kernel
As I said, I think you don't want to do that.
I don't have a strong opinion about that ATM. Both approaches have advantages and disadvantages.
or work into
an approach that will make libusb
(or an appropriate substitute)
Something like what Hans proposed makes sense in my eyes, as an appropriate substitute, but it seems that this is exactly what you don't want. I can really see two alternatives there:
1) keep the libusb API, i.e. the driver for data access stays in userspace, and a char device allows communicating with the USB device in a transparent way;
2) create a camera API, like the Hans/Adam proposal.
If we take (1), we should just use the already existing kernel infrastructure, plus resource locking, to put the USB device into "exclusive access" mode.
to return -EBUSY when a driver like V4L
is in usage, and vice-versa.
I never took a look on how libusb works. It seems that the logic for it is at drivers/usb/core/devio.c. Assuming that this is correct driver for libusb, the locking patch would be similar to the enclosed one.
Of course, more work will be needed. For example, in the specific case of devices where starting stream will clean the memory data, the V4L driver will need some additional logic to detect if the memory is filled and not allowing stream (or requiring CAP_SYS_ADMIN, returning -EPERM otherwise).
Yes, this is probably a good idea in any event.
Agreed.
As far as I know, this would affect just one kernel driver. A complication is that it is only some of the cameras supported by that driver, and they need to be detected.
Yes.
NOTE: As the problem also happens with some PCI devices, instead of adding such locking schema at usb_device, it seems better to bind whatever solution into struct device.
Interesting comment.
The problem with PCI devices is not exactly the same, but I tried to think of a way that could also work for those issues. Eventually, when actually implementing the code, we may come to the conclusion that this is the right thing to do, or decide to address those cases with a different solution.
The issue we have (and it is bus-agnostic) is that some resources depend on, or are mutually exclusive with, other resources.
For example, consider a single-tuner device that supports both analog and digital TV. Due to the analog TV support, such a device needs to have an ALSA module.
However, accessing the ALSA input depends on having the hardware in analog mode, as, on almost all supported hardware, there's no MPEG decoder inside it. So, accessing the ALSA device should return -EBUSY if the device is in digital mode.
On the other hand, as the device has just one tuner, the digital mode driver can't be used simultaneously with the analog mode one.
So, what I'm seeing is that we need some kernel way to describe hardware resource dependencies.
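Just as a sketch of what such a description could look like (all names and the structure below are invented for illustration; nothing of the sort exists yet):

/* Hypothetical per-board table describing resource dependencies. */
enum hw_resource {
	RES_TUNER          = 1 << 0,
	RES_ANALOG_DECODER = 1 << 1,
	RES_DIGITAL_DEMOD  = 1 << 2,
	RES_ALSA_CAPTURE   = 1 << 3,
};

struct hw_resource_rule {
	unsigned int res;	/* resource being requested */
	unsigned int needs;	/* resources that must be active with it */
	unsigned int excludes;	/* resources that must not be active */
};

/* Single-tuner hybrid board: ALSA capture only works in analog mode,
 * and analog and digital mode exclude each other (one tuner). */
static const struct hw_resource_rule hybrid_rules[] = {
	{ RES_ALSA_CAPTURE,   RES_ANALOG_DECODER, RES_DIGITAL_DEMOD },
	{ RES_ANALOG_DECODER, RES_TUNER,          RES_DIGITAL_DEMOD },
	{ RES_DIGITAL_DEMOD,  RES_TUNER,          RES_ANALOG_DECODER },
};

A get_resource_lock()-style call could then walk such a table and return -EBUSY when a rule is violated.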
Regards, Mauro
On Monday 08 August 2011, Mauro Carvalho Chehab wrote:
Well, in practice the "fork" would presumably be carried out by yours truly. Presumably with the advice and help of concerned parties. too. Since I am involved on both the kernel side and the libgphoto2 side of the support for the same cameras, it would certainly shorten the lines of communication at the very least. Therefore this is not infeasible.
Forking the code just because we have something "special" is the wrong thing to do (TM). I would not like to fork V4L core code due to some special need, but instead to add some glue there to cover the extra case. Maintaining a fork is bad in long term, as the same fixes/changes will likely be needed on both copies.
Unfortunately there is some difficulty with libusb in that respect. libgphoto relies upon libusb-0.1 because it is cross-platform and Win32 support in libusb-1.0 is only just being integrated. The libusb developers consider the libusb-0.1 API frozen and are not willing to extend it to address our problem. libusb doesn't expose the file descriptor it uses to talk to the underlying device, so it is hard to extend the interface without forking libusb. (The best hope I can think of at the moment is to get the distros to accept a patch for it that adds the extra required API call(s), and for libgphoto to use the extra features in that patch if it detects at compile time that they are supported.)
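For reference, this is roughly what a libusb-0.1 based camlib/iolib does today to get hold of the interface (shown only to illustrate the mechanics being discussed here; error handling trimmed):

#include <stddef.h>
#include <usb.h>	/* libusb-0.1, as used by the current camlibs */

static usb_dev_handle *grab_camera(struct usb_device *dev)
{
	usb_dev_handle *h = usb_open(dev);

	if (!h)
		return NULL;
#ifdef LIBUSB_HAS_DETACH_KERNEL_DRIVER_NP
	/* Linux-only: unbinds e.g. gspca from the interface; the kernel
	 * driver just sees a disconnect. */
	usb_detach_kernel_driver_np(h, 0);
#endif
	if (usb_claim_interface(h, 0) < 0) {
		/* A kernel-side lock could make this fail with -EBUSY
		 * while the device is streaming, instead of stealing it. */
		usb_close(h);
		return NULL;
	}
	return h;
}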
Adam Baker
On Mon, 8 Aug 2011, Adam Baker wrote:
On Monday 08 August 2011, Mauro Carvalho Chehab wrote:
Well, in practice the "fork" would presumably be carried out by yours truly. Presumably with the advice and help of concerned parties. too. Since I am involved on both the kernel side and the libgphoto2 side of the support for the same cameras, it would certainly shorten the lines of communication at the very least. Therefore this is not infeasible.
Forking the code just because we have something "special" is the wrong thing to do (TM). I would not like to fork V4L core code due to some special need, but instead to add some glue there to cover the extra case. Maintaining a fork is bad in long term, as the same fixes/changes will likely be needed on both copies.
Unfortunately there is some difficulty with libusb in that respect. libgphoto relies upon libusb-0.1 becuase it is cross platform and Win32 support in libusb-1.0 is only just being integrated. The libusb developers consider the libusb-0.1 API frozen and are not willing to extend it to address our problem. libusb doesn't expose the file descriptor it uses to talk to the underlying device so it is hard to extend the interface without forking libusb (The best hope I can think of at the moment is to get the distros to accept a patch for it to add the extra required API call(s) and for libgphoto to use the extra features in that patch if it detects it is supported at compile time).
Adam,
Yes, you are quite correct about this. I was just on the way out of the house and remembered that this problem exists, decided to re-connect and add this point to the witches' brew that we are working on.
What struck me was not the Windows support, though; it was the Mac support. And a number of people run Gphoto stuff on Mac, too. That just reinforces your point, of course. Gphoto is explicitly cross-platform. It is developed on Linux but it is supposed to compile on anyone's C compiler and run on any hardware platform or operating system which has available the minimal support required to make it work.
You are right. We, basically, can not screw with the internals of libgphoto2. At the outside, one can not go to the point where any changes would break the support for other platforms.
Theodore Kilgore
Hi,
On 08/08/2011 07:39 PM, Theodore Kilgore wrote:
On Mon, 8 Aug 2011, Mauro Carvalho Chehab wrote:
<snip>
Mauro,
In fact none of the currently known and supported cameras are using PTP. All of them are proprietary. They have a rather intimidating set of differences in functionality, too. Namely, some of them have an isochronous endpoint, and some of them rely exclusively upon bulk transport. Some of them have a well developed set of internal capabilities as far as handling still photos are concerned. I mean, such things as the ability to download a single photo, selected at random from the set of photos on the camera, and some do not, requiring that the "ability" to do this is emulated in software -- by first downloading all previously listed photos and sending the data to /dev/null, then downloading the desired photo and saving it. Some of them permit deletion of individual photos, or all photos, and some do not. For some of them it is even true, as I have previously mentioned, that the USB command string which will delete all photos is the same command used for starting the camera in streaming mode.
But the point here is that these cameras are all different from one another, depending upon chipset and even, sometimes, upon firmware or chipset version. The still camera abilities and limitations of all of them are pretty much worked out in libgphoto2. My suggestion would be that the libgphoto2 support libraries for these cameras ought to be left the hell alone, except for some changes in, for example, how the camera is accessed in the first place (through libusb or through a kernel device) in order to address adequately the need to support both modes. I know what is in those libgphoto2 drivers because I wrote them. I can definitely promise that to move all of that functionality over into kernel modules would be a nightmare and would moreover greatly contribute to kernel bloat. You really don't want to go there.
I strongly disagree with this. The libgphoto2 camlibs (drivers) for these cameras handle a number of different tasks:
1) Talking to the camera getting binary blobs out of them (be it a PAT or some data)
2) Interpreting said blobs
3) Converting the data parts to pictures doing post processing, etc.
I'm not suggesting to move all of this to the kernel driver, we just need to move part 1. to the kernel driver. This is not rocket science.
We currently have a really bad situation where drivers are fighting for the same device. The problem here is that these devices are not only one device on the physical level, but also one device on the logical level (IOW they have only 1 usb interface).
It is time to quit thinking in band-aids and solve this properly: 1 logical device means it gets 1 driver.
This may be an approach which means some more work than others, but I believe in the end that doing it right is worth the effort.
As for Mauro's resource locking patches, these won't work because they assume both drivers are active at the same time, which is simply not true. Only 1 driver can be bound to the interface at a time, and when switching from the gspca driver to the usbfs driver, gspca will see an unplug which is indistinguishable from a real device unplug.
Moreover, a kernel-only solution without libgphoto changes won't solve the problem of a libgphoto app keeping the device open, locking out streaming.
Regards,
Hans
On Tue, 9 Aug 2011, Hans de Goede wrote:
Hi,
On 08/08/2011 07:39 PM, Theodore Kilgore wrote:
On Mon, 8 Aug 2011, Mauro Carvalho Chehab wrote:
<snip>
I strongly disagree with this. The libgphoto2 camlibs (drivers) for these cameras handle a number of different tasks:
- Talking to the camera getting binary blobs out of them (be it a PAT or some data)
- Interpreting said blobs
- Converting the data parts to pictures doing post processing, etc.
I'm not suggesting to move all of this to the kernel driver, we just need to move part 1. to the kernel driver.
I did not assume otherwise.
This is not rocket science.
No, but both Adam and I realized, approximately at the same time yesterday afternoon, something which is rather important here. Gphoto is not developed exclusively for Linux. Furthermore, it has a significant user base both on Windows and on MacOS, not to mention BSD. It really isn't nice to be screwing around too much with the way it works.
We currently have a really bad situation were drivers are fighting for the same device. The problem here is that these devices are not only one device on the physical level, but also one device on the logical level (IOW they have only 1 usb interface).
All true. Which is why I brought the topic up for discussion in the first place and why it now gets on the program of the USB Summit.
It is time to quit thinking in band-aides and solve this properly, 1 logical device means it gets 1 driver.
This may be an approach which means some more work then others, but I believe in the end that doing it right is worth the effort.
Clearly, we agree about "doing it right is worth the effort." The whole discussion right now is about what is "right."
As for Mauro's resource locking patches, these won't work because the assume both drivers are active at the same time, which is simply not true. Only 1 driver can be bound to the interface at a time, and when switching from the gspca driver to the usbfs driver, gspca will see an unplug which is indistinguishable from a real device unplug.
Things would not have to happen so, of course. Things did not used to happen so. Presence of kernel support for streaming used to block stillcam access through libusb. Period. End of discussion. The code change in libusb which changes that default behavior is quite recent. It was done because the kernel was *not* addressing the problem at all. That change could presumably be reversed if it were decided that the kernel is going to do the work instead.
A POV could be defended, that this behavior of libusb was put in as a stopgap measure because the kernel was not doing its job. In which case the right thing to do is to put the missing functionality into the kernel drivers and take out from libusb the attempt to provide it, when libusb really can't do the job completely.
More over a kernel only solution without libgphoto changes won't solve the problem of a libgphoto app keeping the device open locking out streaming.
Eh? You really lose me with this one. If the camera is streaming then clearly any attempt to do stillcam stuff needs to be blocked. If stillcam stuff is being done then streaming needs to be blocked. Sauce for the goose is sauce for the gander. You seem to be saying that one of these activities takes priority over the other. Why?
Theodore Kilgore
Hi,
On 08/09/2011 07:10 PM, Theodore Kilgore wrote:
On Tue, 9 Aug 2011, Hans de Goede wrote:
<snip>
No, but both Adam and I realized, approximately at the same time yesterday afternoon, something which is rather important here. Gphoto is not developed exclusively for Linux. Furthermore, it has a significant user base both on Windows and on MacOS, not to mention BSD. It really isn't nice to be screwing around too much with the way it works.
Right, so my plan is not to rip out the existing camlibs from libgphoto2, but to instead add a new camlib which talks to /dev/video# nodes which support the new to be defined v4l2 API for this. This camlib will then take precedence over the old libusb based ones when running on a system which has a new enough kernel. On systems without the new enough kernel the matching portdriver won't find any ports, so the camlib will be effectively disabled. On BSD the port driver for this new /dev/video# API and the camlib won't even get compiled.
<snip>
It is time to quit thinking in band-aides and solve this properly, 1 logical device means it gets 1 driver.
This may be an approach which means some more work then others, but I believe in the end that doing it right is worth the effort.
Clearly, we agree about "doing it right is worth the effort." The whole discussion right now is about what is "right."
I'm sorry but I don't get the feeling that the discussion currently is focusing on what is "right". To me too much attention is being spent on not throwing away the effort put into the current libgphoto2 camlibs, which I don't like for 2 reasons:
1) It distracts from doing what is right
2) It ignores the fact that a lot has been learned in doing those camlibs, really really a lot, and all that can be re-used in a kernel driver.
Let me try to phrase it in a way I think you'll understand. If we agree on doing it right over all other things (such as the fact that doing it right may take a considerable effort). Then this could be an interesting assignment for some of the computer science students I used to be a lecturer for. This assignment could read something like "Given the existing situation and knowledge < describe all that here>, do a re-design for the driverstack for these dual mode cameras, assuming a completely fresh start".
Now if I were to give this assignment to a group of students, and they would keep coming back with the "but re-doing the camlibs in kernelspace is such a large effort, and we already have them in userspace" argument against using one unified driver for these devices, I would give them an F, because they are clearly missing the "assuming a completely fresh start" part of the assignment.
I'm sorry if this sounds a bit harsh, but this is the way how the current discussion feels to me. If we agree on aiming for "doing it right" then with that comes to me doing a software design from scratch, so without taking into account what is already there.
There are of course limits to the from scratch part, in the end we want this to slot into the existing Linux practices for webcams and stillcams, which means:
1) offering a v4l2 /dev/video# node for streaming; and
2) access to the pictures stored on the camera through libgphoto
Taking these 2 constraints into account, and combining that with my firm belief that the solution to all the device sharing problems is handling both functions in a single driver, I end up with only 1 option:
Have a kernel driver which provides both functions of the device, with the streaming exported as a standard v4l2 device, and the stillcam function exported with some to be defined API. Combined with a libgphoto2 portlib and camlib for this new API, so that existing libgphoto2 apps can still access the pictures as if nothing was changed.
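Purely as a sketch of what that "to be defined" still-image API could look like (every name and ioctl number below is invented for illustration; this is not an existing or agreed API):

#include <linux/ioctl.h>
#include <linux/types.h>

/* Hypothetical per-picture descriptor. */
struct still_image_info {
	__u32 index;		/* picture number on the camera */
	__u32 width;
	__u32 height;
	__u32 pixelformat;	/* FOURCC of the raw data */
	__u32 sizeimage;	/* bytes a subsequent read will return */
};

/* Hypothetical ioctls on the same /dev/video# node. */
#define STILL_G_COUNT		_IOR('V', 200, __u32)
#define STILL_S_SELECT		_IOW('V', 201, __u32)
#define STILL_G_INFO		_IOWR('V', 202, struct still_image_info)
#define STILL_DELETE		_IOW('V', 203, __u32)
#define STILL_DELETE_ALL	_IO('V', 204)

The raw image data itself could then be fetched with read() or a buffer ioctl; how that maps onto Adam's list of operations is exactly the sort of thing to settle at the workshop.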
Regards,
Hans
On Tue, 9 Aug 2011, Hans de Goede wrote:
Hi,
On 08/09/2011 07:10 PM, Theodore Kilgore wrote:
On Tue, 9 Aug 2011, Hans de Goede wrote:
<snip>
No, but both Adam and I realized, approximately at the same time yesterday afternoon, something which is rather important here. Gphoto is not developed exclusively for Linux. Furthermore, it has a significant user base both on Windows and on MacOS, not to mention BSD. It really isn't nice to be screwing around too much with the way it works.
Right, so my plan is not to rip out the existing camlibs from libgphoto2, but to instead add a new camlib which talks to /dev/video# nodes which support the new to be defined v4l2 API for this. This camlib will then take precedence over the old libusb based ones when running on a system which has a new enough kernel. On systems without the new enough kernel the matching portdriver won't find any ports, so the camlib will be effectively disabled.
And then, I assume you mean, the old camlib will still work.
On BSD the port driver for this new /dev/video#
API and the camlib won't even get compiled.
<snip>
It is time to quit thinking in band-aides and solve this properly, 1 logical device means it gets 1 driver.
This may be an approach which means some more work then others, but I believe in the end that doing it right is worth the effort.
Clearly, we agree about "doing it right is worth the effort." The whole discussion right now is about what is "right."
I'm sorry but I don't get the feeling that the discussion currently is focusing on what is "right".
You are very impatient.
To me too much attention is being spend on not throwing away the effort put in the current libgphoto2 camlibs, which I don't like for 2 reasons:
- It distracts from doing what is right
- It ignores the fact that a lot has been learned in doing those
camlibs, really really a lot. and all that can be re-used in a kernel driver.
Note that your two items can contradict or cancel each other out if one is not careful?
Let me try to phrase it in a way I think you'll understand. If we agree on doing it right over all other things (such as the fact that doing it right may take a considerable effort). Then this could be an interesting assignment for some of the computer science students I used to be a lecturer for. This assignment could read something like "Given the existing situation and knowledge < describe all that here>, do a re-design for the driverstack for these dual mode cameras, assuming a completely fresh start".
Now if I were to give this assignment to a group of students, and they would keep coming back with the "but re-doing the camlibs in kernelspace is such a large effort, and we already have them in userspace" argument against using one unified driver for these devices, I would give them an F, because they are clearly missing the "assuming a completely fresh start" part of the assignment.
Well, for one thing, Hans, we do not have here any instructor who is giving us an assignment. And nobody is in the position to specify that the assignment says "assuming a completely fresh start" -- unless Linus happens to be reading this thread and chimes in. Otherwise, unless there is a convincing demonstration that "assuming a completely fresh start" is an absolute and unavoidable necessity, someone is probably going to disagree.
I'm sorry if this sounds a bit harsh,
Yes, I am sorry about that, too.
but this is the way how
the current discussion feels to me. If we agree on aiming for "doing it right" then with that comes to me doing a software design from scratch, so without taking into account what is already there.
Here, a counter-argument is to point out, as I did in a mail earlier this afternoon, that "without taking account what is already there" might possibly let one overlook something important. And, no, I am not referring to the userspace-kernelspace problem with this. I am referring to the fact that simply to dump the entire contents of the camera "into cache" (and to keep it there for quite a while) might not necessarily be a good idea and it had been quite consciously rejected to do that in the design of libgphoto2. Not because it is in userspace, but because to do that eats up and ties up RAM of which one cannot assume there is a surplus.
Do not misunderstand, though. I am not even going so far as to say that libgphoto2 made the right decision. It certainly has its drawbacks, in that it places severe requirements on someone programming a driver for a really stupid camera. But what I *am* saying is that the issue was anticipated, the issue was faced, and a conscious decision was made. This is the opposite of not anticipating, not facing an issue, and not making any conscious decision.
Oh, another example of such lack of deep thought has produced the current crisis, too. I am referring to the amazing decision of some user interface designers that an app for downloading still photos has to be fired up immediately, just as soon as the "still camera" is plugged in. I would really hate to be a passenger in a sailboat piloted by one of those guys. But, hey, nobody is perfect.
There are of course limits to the from scratch part, in the end we want this to slot into the existing Linux practices for webcams and stillcams, which means:
- offering a v4l2 /dev/video# node for streaming; and
- access to the pictures stored on the camera through libgphoto
Taking these 2 constraints into account, and combining that with my firm belief that the solution to all the device sharing problems is handling both functions in a single driver, I end up with only 1 option:
Have a kernel driver which provides both functions of the device, with the streaming exported as a standard v4l2 device, and the stillcam function exported with some to-be-defined API. Combined with a libgphoto2 portlib and camlib for this new API, so that existing libgphoto2 apps can still access the pictures as if nothing had changed.
Well, what I _do_ think is that we need to agree about precisely what is supposed to work and what is not, in an operational sense. But we are still fuzzy about that. For example, you seemed to assert this morning that the webcam functionality needs to be able to preempt any running stillcam app and to grab the camera. Why? Or did I misunderstand you?
Then after we (and everybody else with an interest in the matter) have settled on precisely how the outcome is supposed to behave, we need to take a couple of test cases. Probably the best would be to get some people to look at one driver and see if anything can be done to make that driver work better, using either Plan A or Plan B, or, for that matter, Plan C.
Theodore Kilgore
Hi,
On 08/10/2011 02:34 AM, Theodore Kilgore wrote:
<snip>
but this is how the current discussion feels to me. If we agree on aiming for "doing it right", then to me that implies doing a software design from scratch, so without taking into account what is already there.
Here, a counter-argument is to point out, as I did in a mail earlier this afternoon, that "without taking into account what is already there" might possibly let one overlook something important. And, no, I am not referring to the userspace-kernelspace problem with this. I am referring to the fact that simply to dump the entire contents of the camera "into cache" (and to keep it there for quite a while) might not necessarily be a good idea, and doing that was quite consciously rejected in the design of libgphoto2. Not because it is in userspace, but because to do that eats up and ties up RAM of which one cannot assume there is a surplus.
This is an implementation detail which has little to do with the fundamental choice of whether or not we want 2 separate drivers or 1 single driver.
In part of the snipped message you called me impatient (no offense taken), my perceived impatience is stemming from what to me feels like we are dancing around the real issue here. The fundamental question is do we want 2 separate drivers or 1 single driver for these devices.
Let's answer that first, using all we've learned from the past. But without taking into account that one choice or the other will involve re-doing lots of code, as to me that is a poor argument from a technical pov.
<snip>
There are of course limits to the from scratch part, in the end we want this to slot into the existing Linux practices for webcams and stillcams, which means:
- offering a v4l2 /dev/video# node for streaming; and
- access to the pictures stored on the camera through libgphoto
Taking these 2 constraints into account, and combining that with my firm belief that the solution to all the device sharing problems is handling both functions in a single driver, I end up with only 1 option:
Have a kernel driver which provides both functions of the device, with the streaming exported as a standard v4l2 device, and the stillcam function exported with some to-be-defined API. Combined with a libgphoto2 portlib and camlib for this new API, so that existing libgphoto2 apps can still access the pictures as if nothing had changed.
Well, what I _do_ think is that we need to agree about precisely what is supposed to work and what is not, in an operational sense. But we are still fuzzy about that. For example, you seemed to assert this morning that the webcam functionality needs to be able to preempt any running stillcam app and to grab the camera. Why? Or did I misunderstand you?
You've misunderstood me. We need to distinguish between an application having a tie to the device (so having a fd open) and the application doing an actual operation on the device.
No application should be able to pre-empt an ongoing operation by another application. Attempting an operation while another operation is ongoing should result in -EBUSY.
This differs significantly from what we currently have, where:
1) There is no distinction between an app having a tie and an app actually doing an operation. Only one app can have a fd open
2) Some apps (userspace apps) can pre-empt other apps, taking away their fd and cancelling any ongoing operations
The above is what leads me to my still firm belief that having a single driver is the only solution. My reasoning is as follows (a rough sketch of the resulting -EBUSY arbitration is given after the list):
1) We cannot count on apps closing the fd when they have no immediate use for the device, iow open != in-use
2) Thus we need to allow both libgphoto2 and v4l2 apps to have the device open at the same time
3) When the device is actually in use (so an operation is ongoing), attempts by other apps to start an operation will result in -EBUSY
4) 2 + 3 can only be realized by having a single driver
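To make 3) concrete, here is a rough sketch with made-up names (cam_dev, cam_start_op, ...); this is not existing kernel code, only an illustration of the trylock-style arbitration I have in mind:

    /* Sketch only: both a v4l2 app and a libgphoto2-style app may keep the
     * device open; only actual operations are serialized. */
    #include <linux/mutex.h>
    #include <linux/errno.h>

    struct cam_dev {
            struct mutex op_lock;   /* held only while an operation is ongoing */
    };

    /* open() succeeds for everyone: holding an fd is "a tie", not "in use" */

    static int cam_start_op(struct cam_dev *cam)
    {
            /* operations (streaming, picture download, ...) are exclusive:
             * if somebody else is mid-operation, fail with -EBUSY instead
             * of pre-empting them */
            if (!mutex_trylock(&cam->op_lock))
                    return -EBUSY;
            return 0;
    }

    static void cam_end_op(struct cam_dev *cam)
    {
            mutex_unlock(&cam->op_lock);
    }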
Regards,
Hans
On Monday 08 August 2011, Mauro Carvalho Chehab wrote:
I will send a second reply to this message, which deals in particular with the list of abilities you outlined above. The point is, the situation as to that list of abilities is more chaotic than is generally realized. And when people are laying plans they really need to be aware of that.
From what I understood from your proposal, "/dev/camX" would be providing a libusb-like interface, right?
If so, then, I'd say that we should just use the current libusb infrastructure. All we need is a way to lock libusb access when another driver is using the same USB interface.
I think adding the required features to libusb is in general the correct approach; however, some locking may be needed in the kernel regardless, to ensure a badly behaved libusb or libusb user can't corrupt kernel state.
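For reference, libusb-1.0 already has the calls a userspace driver needs to take an interface away from a kernel driver and hand it back without a replug; roughly (device lookup and error handling omitted):

    #include <libusb.h>

    static int claim_from_kernel(libusb_device_handle *h, int intf)
    {
            /* unbind e.g. gspca from the interface, then claim it ourselves */
            if (libusb_kernel_driver_active(h, intf) == 1)
                    libusb_detach_kernel_driver(h, intf);
            return libusb_claim_interface(h, intf);
    }

    static void give_back_to_kernel(libusb_device_handle *h, int intf)
    {
            libusb_release_interface(h, intf);
            /* rebind the kernel driver so the device works again without replugging */
            libusb_attach_kernel_driver(h, intf);
    }

What is missing is exactly the kernel-side locking mentioned above, so that a misbehaving libusb user cannot corrupt the kernel driver's state.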
Hans and Adam's proposal is to actually create a "/dev/camX" node that will give fs-like access to the pictures. As data access to the cameras generally uses PTP (or a PTP-like protocol), probably one driver will handle several different types of cameras, so we'll end up having a different driver for PTP than the V4L driver.
I'm not advocating this approach, my post was intended as a "straw man" to allow the advantages and disadvantages of such an approach to be considered by all concerned. I suspected it would be excessively complex but I don't know enough about the various cameras to be certain.
Adam
On Mon, 8 Aug 2011, Adam Baker wrote:
On Monday 08 August 2011, Mauro Carvalho Chehab wrote:
I will send a second reply to this message, which deals in particular with the list of abilities you outlined above. The point is, the situation as to that list of abilities is more chaotic than is generally realized. And when people are laying plans they really need to be aware of that.
From what I understood from your proposal, "/dev/camX" would be providing a libusb-like interface, right?
If so, then, I'd say that we should just use the current libusb infrastructure. All we need is a way to lock libusb access when another driver is using the same USB interface.
I think adding the required features to libusb is in general the correct approach; however, some locking may be needed in the kernel regardless, to ensure a badly behaved libusb or libusb user can't corrupt kernel state.
Hans and Adam's proposal is to actually create a "/dev/camX" node that will give fs-like access to the pictures. As data access to the cameras generally uses PTP (or a PTP-like protocol), probably one driver will handle several different types of cameras, so we'll end up having a different driver for PTP than the V4L driver.
I'm not advocating this approach, my post was intended as a "straw man" to allow the advantages and disadvantages of such an approach to be considered by all concerned. I suspected it would be excessively complex but I don't know enough about the various cameras to be certain.
Fair enough. Go and have a look at the code in the various subdirectories of libgphoto2/camlibs, and you will see. Also consider that some of those subdirectories do not support currently-supported dual-mode cameras, but some of the ways of doing things that are present there could be applied to any dual-mode camera in the future.
A prime example of what I mean can be seen in camlibs/aox. Those cameras are very old now and they probably will never be fully supported. They can download plain bitmap photos, or they can use some kind of compression which is not figured out. They can, as I recall, be run as webcams, too, and then they will presumably use that weird compression. But what is immediately interesting is that in still mode there is no allocation table, or at least none is downloaded. Everything about how many images and what kind of images and what size they are can be read out of a downloaded allocation table on most cameras, but not on these. No. One has to send a sequence of commands and parse the responses to them in order to get the information.
I merely mention this example in order to point up the actual complexity of the situation, and the necessity not to make sweeping assumptions about how the camera is supposed to work. Be assured, that already happened when Gphoto was set up, and it made some of these cameras rather hard to support. Why? Well, it was set up with the assumption that all still cameras will do X, and Y, and Z. But be assured that someone either has or will design a still camera which is not capable of doing X, nor Y, nor Z, nor, even, all three of them, at least not in the way envisioned in someone's grand design.
OK, another example. The cameras supported in camlibs/jl2005c do not have webcam ability, but someone could at any time design and market a dual-mode camera which has in stillcam mode the same severe limitation. What limitation? Well, the entire memory of the camera must be dumped, or else the camera jams itself. You can stop dumping in the middle of the operation, but you must continue after that. Suppose that you had ten pictures on the camera and you only wanted to download the first one. Then you can do that and temporarily stop downloading the rest. But while exiting you have to check whether the rest are downloaded or not. And if they are not, then it has to be done, with the data simply thrown in the trash, and then the camera's memory pointer reset before the camera is released. How, one might ask, did anyone produce something so primitive? Well, it is done. Perhaps the money saved thereby was at least in part devoted to producing better optics for the camera. At least, one can hope so. But people did produce those cameras, and people have bought them. But does anyone want to reproduce the code to support this kind of crap in the kernel? And go through all of the hoops required in order to fake the behavior which one would "expect" from a "real" still camera? It has already been done in camlibs/jl2005c and isn't that enough?
Theodore Kilgore
Hi,
<snip>
OK, another example. The cameras supported in camlibs/jl2005c do not have webcam ability, but someone could at any time design and market a dual-mode camera which has in stillcam mode the same severe limitation. What limitation? Well, the entire memory of the camera must be dumped, or else the camera jams itself. You can stop dumping in the middle of the operation, but you must continue after that. Suppose that you had ten pictures on the camera and you only wanted to download the first one. Then you can do that and temporarily stop downloading the rest. But while exiting you have to check whether the rest are downloaded or not. And if they are not, then it has to be done, with the data simply thrown in the trash, and then the camera's memory pointer reset before the camera is released. How, one might ask, did anyone produce something so primitive? Well, it is done. Perhaps the money saved thereby was at least in part devoted to producing better optics for the camera. At least, one can hope so. But people did produce those cameras, and people have bought them. But does anyone want to reproduce the code to support this kind of crap in the kernel? And go through all of the hoops required in order to fake the behavior which one would "expect" from a "real" still camera? It has already been done in camlibs/jl2005c and isn't that enough?
This actually is an example where doing a kernel driver would be easier, a kernel driver never exits. So it can simply remember where it was reading (and cache the data it has read so far). If an app requests picture 10, we read 1-10, cache them and return picture 10 to the app, then the same or another app asks for picture 4, get it from cache, asks for picture 20 read 11-20, etc.
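Roughly, such caching could look like the sketch below; cam_read_next_picture() and the fixed limits are made up for illustration only:

    #include <linux/types.h>
    #include <linux/errno.h>

    #define MAX_PICS 64

    struct seq_cam {
            void *pic[MAX_PICS];    /* cached raw data, NULL if not read yet */
            size_t len[MAX_PICS];
            int next;               /* next picture the camera will hand out */
    };

    /* provided by the per-camera code: reads exactly one picture from the device */
    int cam_read_next_picture(struct seq_cam *cam, void **data, size_t *len);

    static int seq_cam_get_picture(struct seq_cam *cam, int nr,
                                   void **data, size_t *len)
    {
            int ret;

            if (nr < 0 || nr >= MAX_PICS)
                    return -EINVAL;

            /* read (and cache) everything up to and including the wanted picture */
            while (cam->next <= nr) {
                    ret = cam_read_next_picture(cam, &cam->pic[cam->next],
                                                &cam->len[cam->next]);
                    if (ret)
                            return ret;
                    cam->next++;
            }

            /* later requests for earlier pictures are served from the cache */
            *data = cam->pic[nr];
            *len = cam->len[nr];
            return 0;
    }

Whether keeping whole camera dumps cached like this is acceptable on low-memory systems is of course a separate trade-off.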
Having written code for various small digital picture frames (the keychain models) I know where you are coming from. Trust me I do. Recently I had an interesting bug report, with a corrupt PAT (picture allocation table); it turns out that when deleting a picture through the menu inside the frame a different marker gets written to the PAT than when deleting it with the Windows software. Fun, huh?
So yeah duplicating this code is no fun, but it is the only realistic solution which will get us a 100% reliable and robust user experience.
Regards,
Hans
On Tue, 9 Aug 2011, Hans de Goede wrote:
Hi,
<snip>
OK, another example. The cameras supported in camlibs/jl2005c do not have webcam ability, but someone could at any time design and market a dual-mode camera which has in stillcam mode the same severe limitation. What limitation? Well, the entire memory of the camera must be dumped, or else the camera jams itself. You can stop dumping in the middle of the operation, but you must continue after that. Suppose that you had ten pictures on the camera and you only wanted to download the first one. Then you can do that and temporarily stop downloading the rest. But while exiting you have to check whether the rest are downloaded or not. And if they are not, then it has to be done, with the data simply thrown in the trash, and then the camera's memory pointer reset before the camera is released. How, one might ask, did anyone produce something so primitive? Well, it is done. Perhaps the money saved thereby was at least in part devoted to producing better optics for the camera. At least, one can hope so. But people did produce those cameras, and people have bought them. But does anyone want to reproduce the code to support this kind of crap in the kernel? And go through all of the hoops required in order to fake the behavior which one would "expect" from a "real" still camera? It has already been done in camlibs/jl2005c and isn't that enough?
This actually is an example where doing a kernel driver would be easier, a kernel driver never exits. So it can simply remember where it was reading (and cache the data it has read so far). If an app requests picture 10, we read 1-10, cache them and return picture 10 to the app, then the same or another app asks for picture 4, get it from cache, asks for picture 20 read 11-20, etc.
This, in fact, is the way that the OEM software for most of these cheap cameras works. The camera is dumped, and then raw files for the pictures are created in C:\TEMP. Then the raw files are all processed immediately into viewable pictures, after which thumbnails (which did not previously exist as separate entities) can be created for use in the GUI app. Then, if the user "chooses" to "save" certain of the photos, the "chosen" photos are merely copied to a more permanent location. And when the camera-accessing app is exited, the temporary files are all deleted.
Clearly, the OEM approach recommends itself for simplicity. Nevertheless, there is an obvious disadvantage. Namely, *all* of the raw data from the camera needs to be fetched and, as you say, "kept in cache." That "cache" is either going to use RAM, or is going to be based in swap. And not every piece of hardware is a big, honking system with plenty of gigabytes in the RAM slots, and moreover there exist systems with low memory where it is also considered not a good idea to use swap. Precisely because of these realities, the design of libgphoto2 has consciously rejected the approach used in the OEM drivers. Rather, it is a priority to lower the memory footprint by dealing with the data piece by piece. This means, essentially, handling the photos on the camera one at a time. It is worth considering that some of the aforementioned low-powered systems with low quantities of RAM on board, and with no allocated swap space are running Linux these days.
Having written code for various small digital picture frames (the keychain models) I know where you are coming from. Trust me I do.
Not to worry. I know where you are coming from, too. Trust me I do.
Recently I had
an interesting bug report, with a corrupt PAT (picture allocation table); it turns out that when deleting a picture through the menu inside the frame a different marker gets written to the PAT than when deleting it with the Windows software. Fun, huh?
Yes, of course it is fun. We should not have signed up to do this kind of work if we can't take a joke, right?
But, more seriously, there may be some reason why that different character is used -- or OTOH maybe not, and somebody was just being silly. Unfortunately, experience tells me it is probably necessary to figure out which of the two possibilities it is.
Theodore Kilgore
(second reply to Adam's message)
On Sun, 7 Aug 2011, Adam Baker wrote:
On Friday 05 August 2011, Hans de Goede wrote:
This sounds to be a good theme for the Workshop, or even to KS/2011.
Agreed, although we don't need to talk about this for very long, the solution is basically:
Define a still image retrieval API for v4l2 devices (there is only 1 interface for both functions on these devices, so only 1 driver, and to me it makes sense to extend the existing drivers to also do still image retrieval).
Modify existing kernel v4l2 drivers to provide this API
Write a new libgphoto driver which talks this interface (only need to do one driver since all dual mode cams will export the same API).
is something to discuss at the workshop.
This approach sounds fine as long as you can come up with a definition for the API that covers the existing needs and is extensible when new cameras come along and doesn't create horrible inefficiencies by not matching the way some cameras work. I've only got one example of such a camera and it is a fairly basic one but things I can imagine the API needing to provide are
This reply deals exclusively with an analysis of the following list of abilities. Briefly, the situation is more complicated than one might expect. The detailed answers below are provided so that people can be fully aware of the complexity of the situation, on the grounds that such things should be more generally known before plans are made, rather than after.
For my analysis of whether it is appropriate or not to do such things as are on this list inside the kernel, please look at my previous reply.
- Report number of images on device
Mercifully, all dual-mode cameras I know of will do this. A stillcam which would not report this would be real trouble, so it is reasonable to expect this to work.
- Select an image to read (for some cameras selecting next may be much more
efficient than selecting at random although whether that inefficiency occurs when selecting, when reading image info or when reading image data may vary)
Briefly, some cameras will not let one select at random, at all. One has to read all previous data and discard it.
- Read image information for selected image (resolution, compression type,
FOURCC)
This kind of information may be contained in the image data itself. In the alternative, it may be contained elsewhere, such as in an allocation table. It could also be collected, image for image, as responses to a sequence of queries. I have seen all of these.
- Read raw image data for selected image
This might require reading the data for all previous images, or might not.
- Delete individual image (not supported by all cameras)
Indeed.
- Delete all images (sometimes supported on cameras that don't support
individual delete)
Yes, sometimes. And sometimes not. And sometimes it depends which firmware version it is, too.
I'm not sure if any of these cameras support tethered capture but if they do
Yes, they all do, in a sense. They will all take a picture and send the image down to the computer, which is one kind of tethered capture. AFAIK none of them will take a picture and store it on the camera, a second kind of tethered capture. Those cameras which use bulk transport for all data transfer have this feature supported in libgphoto2. Those which use isochronous transport when running in webcam mode have to take tethered pictures by way of the webcam functionality.
then add: Take photo, Set resolution
I doubt if any of them support EXIF data,
No, they don't
thumbnail images,
No
the ability to upload images to the camera
No
or any sound recording
No, with one known exception. One of the mr97310a cameras has a microphone on it and can be used to record sound. AFAICT it cannot be used this way and also take pictures at the same time. There is a little toggle switch on the camera which has to be pushed either toward "audio" setting or toward "video" setting. Downloading of audio (wav) files is therefore supported in libgphoto2/camlibs/mars.
but if they do then those
are additional things that gphoto2 would want to be able to do.
Yes. And, now, as I said in the previous message, it is far better just to figure out a way to let gphoto2 access the camera in peace when legitimately summoned to do so, and not to mess with re-creating all of these perplexing variations on camera abilities in various camera drivers in the kernel.
Theodore Kilgore
Hi,
On 08/08/2011 12:53 AM, Adam Baker wrote:
On Friday 05 August 2011, Hans de Goede wrote:
This sounds to be a good theme for the Workshop, or even to KS/2011.
Agreed, although we don't need to talk about this for very long, the solution is basically:
Define a still image retrieval API for v4l2 devices (there is only 1 interface for both functions on these devices, so only 1 driver, and to me it makes sense to extend the existing drivers to also do still image retrieval).
Modify existing kernel v4l2 drivers to provide this API
Write a new libgphoto driver which talks this interface (only need to do one driver since all dual mode cams will export the same API).
is something to discuss at the workshop.
This approach sounds fine as long as you can come up with a definition for the API that covers the existing needs and is extensible when new cameras come along and doesn't create horrible inefficiencies by not matching the way some cameras work. I've only got one example of such a camera and it is a fairly basic one but things I can imagine the API needing to provide are
- Report number of images on device
Make that a "report highest picture number present" call. We want to provide consistent numbers for pictures even if some are deleted; renumbering them on the fly when a picture gets deleted is no good, esp. since multiple apps may be using the device at the same time. So we may have a hole in our numbering, hence my initial proposal of having the following API:
int get_max_picture_nr()
int is_picture_present(int nr)
int get_picture(int nr)
int delete_picture(int nr)
int delete_all()
- Select an image to read (for some cameras selecting next may be much more
efficient than selecting at random although whether that inefficiency occurs when selecting, when reading image info or when reading image data may vary)
- Read image information for selected image (resolution, compression type, FOURCC)
I have not yet thought about meta-data. But I agree we will need some metadata to convey things like the format of the picture data returned by get_picture (this will be raw data; any conversion / post-processing will be done in userspace).
- Read raw image data for selected image
- Delete individual image (not supported by all cameras)
- Delete all images (sometimes supported on cameras that don't support
individual delete)
I'm not sure if any of these cameras support tethered capture but if they do then add: Take photo, Set resolution
That is what the webcam mode is for :)
I doubt if any of them support EXIF data, thumbnail images, the ability to upload images to the camera or any sound recording but if they do then those are additional things that gphoto2 would want to be able to do.
sound recordings can be handled like pictures but with a different FOURCC code (conveying that the contents are audio stored in fmt foo).
Regards,
Hans
Hi,
On 08/03/2011 10:36 PM, Mauro Carvalho Chehab wrote:
Em 03-08-2011 16:53, Theodore Kilgore escreveu:
<snip snip>
Mauro,
Not saying that you need to change the program for this session to deal with this topic, but an old and vexing problem is dual-mode devices. It is an issue which needs some kind of unified approach, and, in my opinion, consensus about policy and methodology.
As a very good example of this problem, several of the cameras that I have supported as GSPCA devices in their webcam modality are also still cameras and are supported, as still cameras, in Gphoto. This can cause a collision between driver software in userspace which functions with libusb, and on the other hand with a kernel driver which tries to grab the device.
Recent attempts to deal with this problem involve the incorporation of code in libusb which disables a kernel module that has already grabbed the device, allowing the userspace driver to function. This has made life a little bit easier for some people, but not for everybody. For, the device needs to be re-plugged in order to re-activate the kernel support. But some of the "user-friendly" desktop setups used by some distros will automatically start up a dual-mode camera with a gphoto-based program, thereby making it impossible for the camera to be used as a webcam unless the user goes for a crash course in how to disable the "feature" which has been so thoughtfully (thoughtlessly?) provided.
As the problem is not confined to cameras but also affects some other devices, such as DSL modems which have a partition on them and are thus seen as Mass Storage devices, perhaps it is time to try to find a systematic approach to problems like this.
There are of course several possible approaches.
- A kernel module should handle everything related to connecting up the
hardware. In that case, the existing userspace driver has to be modified to use the kernel module instead of libusb. Those who support this option would say that it gets everything under the control of the kernel, where it belongs. OTOH, the possible result is to create a minor mess in projects like Gphoto.
- The kernel module should be abolished, and all of its functionality
moved to userspace. This would of course involve difficulties approximately equivalent to item 1. An advantage, in the eyes of some, would be to cut down on the yet-another-driver-for-yet-another-piece-of-peculiar-hardware syndrome which obviously contributes to an in principle unlimited increase in the size of the kernel codebase. A disadvantage would be that it would create some disruption in webcam support.
- A further modification to libusb reactivates the kernel module
automatically, as soon as the userspace app which wanted to access the device through a libusb-based driver library is closed. This seems attractive, but it has certain deficiencies as well. One of them is that it can not necessarily provide a smooth and informative user experience, since circumstances can occur in which something appears to go wrong, but the user gets no clear message saying what the problem is. In other words, it is a patchwork solution which only slightly refines the current patchwork solution in libusb, which is in itself only a slight improvement on the original, unaddressed problem.
- ???
Several people are interested in this problem, but not much progress has been made at this time. I think that the topic ought to be put somehow on the front burner so that lots of people will try to think of the best way to handle it. Many eyes, and all that.
Not saying change your schedule, as I said. Have a nice conference. I wish I could attend. But I do hope by this message to raise some general concern about this problem.
That's an interesting issue.
A solution like (3) is a little bit out of scope, as it is a pure userspace (or a mixed userspace USB stack) solution.
Technically speaking, letting the same device be handled by either a userspace or a kernelspace driver doesn't seem smart to me, due to:
- Duplicated efforts to maintain both drivers;
- It is hard to sync a kernel driver with a userspace driver,
as you've pointed out.
So, we're between (1) or (2).
Moving the solution entirely to userspace will additionally have the problem of two applications trying to access the same hardware using two different userspace instances (for example, an incoming videoconf call while Gphoto is open, assuming that such a videoconf call would also have a userspace driver).
IMO, the right solution is to work on a proper snapshot mode, in kernelspace, and to move the drivers that already have a kernelspace counterpart out of Gphoto.
I agree that solution 1), i.e. all the driver bits in kernelspace, is the right solution. This is unrelated to snapshot mode though; snapshot mode is all about taking live snapshots, whereas in this case we are downloading pictures which have already been taken (perhaps days ago) from device memory.
What we need for this is a simple API (new v4l ioctl's I guess) for the stillcam mode of these dual mode cameras (stillcam + webcam). So that the webcam drivers can grow code to also allow access to the stored pictures, which were taken in standalone (iow not connected to usb) stillcam mode.
This API does not need to be terribly complex. AFAIK all of the currently supported dual-mode cameras don't have filenames, only picture numbers, so the API could consist of a simple get highest picture nr / is picture X present (some slots may contain deleted pictures) / get picture X / delete picture X / delete all API.
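To make the shape of that concrete, here is a purely hypothetical sketch; none of these ioctl numbers or structs exist in videodev2.h, they only illustrate the get / is-present / delete idea:

    #include <linux/types.h>
    #include <linux/ioctl.h>

    struct v4l2_stillcam_picture {
            __u32 index;            /* picture number; holes allowed after deletes */
            __u32 present;          /* 0 if this slot was deleted in-camera */
            __u32 pixelformat;      /* FOURCC of the raw data returned */
            __u32 length;           /* size of the raw data in bytes */
            __u64 userptr;          /* userspace buffer to copy the raw data into */
            __u32 reserved[8];      /* room to grow, as with other v4l2 structs */
    };

    #define VIDIOC_STILLCAM_MAX_NR     _IOR('V', 192, __u32)
    #define VIDIOC_STILLCAM_GET        _IOWR('V', 193, struct v4l2_stillcam_picture)
    #define VIDIOC_STILLCAM_DELETE     _IOW('V', 194, __u32)
    #define VIDIOC_STILLCAM_DELETE_ALL _IO('V', 195)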
If others are willing to help flesh out an API for this, I can write a proposal and submit it a few weeks before the Media Subsystem Workshop starts.
Regards,
Hans
Em 04-08-2011 08:39, Hans de Goede escreveu:
Hi,
On 08/03/2011 10:36 PM, Mauro Carvalho Chehab wrote:
Em 03-08-2011 16:53, Theodore Kilgore escreveu:
<snip snip>
Mauro,
Not saying that you need to change the program for this session to deal with this topic, but an old and vexing problem is dual-mode devices. It is an issue which needs some kind of unified approach, and, in my opinion, consensus about policy and methodology.
As a very good example of this problem, several of the cameras that I have supported as GSPCA devices in their webcam modality are also still cameras and are supported, as still cameras, in Gphoto. This can cause a collision between driver software in userspace which functions with libusb, and on the other hand with a kernel driver which tries to grab the device.
Recent attempts to deal with this problem involve the incorporation of code in libusb which disables a kernel module that has already grabbed the device, allowing the userspace driver to function. This has made life a little bit easier for some people, but not for everybody. For, the device needs to be re-plugged in order to re-activate the kernel support. But some of the "user-friendly" desktop setups used by some distros will automatically start up a dual-mode camera with a gphoto-based program, thereby making it impossible for the camera to be used as a webcam unless the user goes for a crash course in how to disable the "feature" which has been so thoughtfully (thoughtlessly?) provided.
As the problem is not confined to cameras but also affects some other devices, such as DSL modems which have a partition on them and are thus seen as Mass Storage devices, perhaps it is time to try to find a systematic approach to problems like this.
There are of course several possible approaches.
- A kernel module should handle everything related to connecting up the
hardware. In that case, the existing userspace driver has to be modified to use the kernel module instead of libusb. Those who support this option would say that it gets everything under the control of the kernel, where it belongs. OTOH, the possible result is to create a minor mess in projects like Gphoto.
- The kernel module should be abolished, and all of its functionality
moved to userspace. This would of course involve difficulties approximately equivalent to item 1. An advantage, in the eyes of some, would be to cut down on the yet-another-driver-for-yet-another-piece-of-peculiar-hardware syndrome which obviously contributes to an in principle unlimited increase in the size of the kernel codebase. A disadvantage would be that it would create some disruption in webcam support.
- A further modification to libusb reactivates the kernel module
automatically, as soon as the userspace app which wanted to access the device through a libusb-based driver library is closed. This seems attractive, but it has certain deficiencies as well. One of them is that it can not necessarily provide a smooth and informative user experience, since circumstances can occur in which something appears to go wrong, but the user gets no clear message saying what the problem is. In other words, it is a patchwork solution which only slightly refines the current patchwork solution in libusb, which is in itself only a slight improvement on the original, unaddressed problem.
- ???
Several people are interested in this problem, but not much progress has been made at this time. I think that the topic ought to be put somehow on the front burner so that lots of people will try to think of the best way to handle it. Many eyes, and all that.
Not saying change your schedule, as I said. Have a nice conference. I wish I could attend. But I do hope by this message to raise some general concern about this problem.
That's an interesting issue.
A solution like (3) is a little bit out of scope, as it is a pure userspace (or a mixed userspace USB stack) solution.
Technically speaking, letting the same device be handled by either a userspace or a kernelspace driver doesn't seem smart to me, due to:
- Duplicated efforts to maintain both drivers;
- It is hard to sync a kernel driver with a userspace driver, as you've pointed out.
So, we're between (1) or (2).
Moving the solution entirely to userspace will additionally have the problem of two applications trying to access the same hardware using two different userspace instances (for example, an incoming videoconf call while Gphoto is open, assuming that such a videoconf call would also have a userspace driver).
IMO, the right solution is to work on a proper snapshot mode, in kernelspace, and to move the drivers that already have a kernelspace counterpart out of Gphoto.
I agree that solution 1), i.e. all the driver bits in kernelspace, is the right solution. This is unrelated to snapshot mode though; snapshot mode is all about taking live snapshots, whereas in this case we are downloading pictures which have already been taken (perhaps days ago) from device memory.
What we need for this is a simple API (new v4l ioctl's I guess) for the stillcam mode of these dual mode cameras (stillcam + webcam). So that the webcam drivers can grow code to also allow access to the stored pictures, which were taken in standalone (iow not connected to usb) stillcam mode.
This API does not need to be terribly complex. AFAIK all of the currently supported dual-mode cameras don't have filenames, only picture numbers, so the API could consist of a simple get highest picture nr / is picture X present (some slots may contain deleted pictures) / get picture X / delete picture X / delete all API.
That sounds workable. I would map it in a way close to the controls API (or like the DVB FE_[GET|SET]_PROPERTY API), as this would make it easier to expand in the future, if we start to see webcams with file names or other things like that.
If others are willing to help flesh out an API for this, I can write a proposal and submit it a few weeks before the Media Subsystem Workshop starts.
Regards,
Hans
On Thu, 4 Aug 2011, Mauro Carvalho Chehab wrote:
Em 04-08-2011 08:39, Hans de Goede escreveu:
Hi,
On 08/03/2011 10:36 PM, Mauro Carvalho Chehab wrote:
Em 03-08-2011 16:53, Theodore Kilgore escreveu:
<snip snip>
Mauro,
Not saying that you need to change the program for this session to deal with this topic, but an old and vexing problem is dual-mode devices. It is an issue which needs some kind of unified approach, and, in my opinion, consensus about policy and methodology.
As a very good example of this problem, several of the cameras that I have supported as GSPCA devices in their webcam modality are also still cameras and are supported, as still cameras, in Gphoto. This can cause a collision between driver software in userspace which functions with libusb, and on the other hand with a kernel driver which tries to grab the device.
Recent attempts to deal with this problem involve the incorporation of code in libusb which disables a kernel module that has already grabbed the device, allowing the userspace driver to function. This has made life a little bit easier for some people, but not for everybody. For, the device needs to be re-plugged in order to re-activate the kernel support. But some of the "user-friendly" desktop setups used by some distros will automatically start up a dual-mode camera with a gphoto-based program, thereby making it impossible for the camera to be used as a webcam unless the user goes for a crash course in how to disable the "feature" which has been so thoughtfully (thoughtlessly?) provided.
As the problem is not confined to cameras but also affects some other devices, such as DSL modems which have a partition on them and are thus seen as Mass Storage devices, perhaps it is time to try to find a systematic approach to problems like this.
There are of course several possible approaches.
- A kernel module should handle everything related to connecting up the
hardware. In that case, the existing userspace driver has to be modified to use the kernel module instead of libusb. Those who support this option would say that it gets everything under the control of the kernel, where it belongs. OTOH, the possible result is to create a minor mess in projects like Gphoto.
- The kernel module should be abolished, and all of its functionality
moved to userspace. This would of course involve difficulties approximately equivalent to item 1. An advantage, in the eyes of some, would be to cut down on the yet-another-driver-for-yet-another-piece-of-peculiar-hardware syndrome which obviously contributes to an in principle unlimited increase in the size of the kernel codebase. A disadvantage would be that it would create some disruption in webcam support.
- A further modification to libusb reactivates the kernel module
automatically, as soon as the userspace app which wanted to access the device through a libusb-based driver library is closed. This seems attractive, but it has certain deficiencies as well. One of them is that it can not necessarily provide a smooth and informative user experience, since circumstances can occur in which something appears to go wrong, but the user gets no clear message saying what the problem is. In other words, it is a patchwork solution which only slightly refines the current patchwork solution in libusb, which is in itself only a slight improvement on the original, unaddressed problem.
- ???
Several people are interested in this problem, but not much progress has been made at this time. I think that the topic ought to be put somehow on the front burner so that lots of people will try to think of the best way to handle it. Many eyes, and all that.
Not saying change your schedule, as I said. Have a nice conference. I wish I could attend. But I do hope by this message to raise some general concern about this problem.
That's an interesting issue.
A solution like (3) is a little bit out of scope, as it is a pure userspace (or a mixed userspace USB stack) solution.
Technically speaking, letting the same device be handled by either a userspace or a kernelspace driver doesn't seem smart to me, due to:
- Duplicated efforts to maintain both drivers;
- It is hard to sync a kernel driver with a userspace driver, as you've pointed out.
So, we're between (1) or (2).
Moving the solution entirely to userspace will additionally have the problem of two applications trying to access the same hardware using two different userspace instances (for example, an incoming videoconf call while Gphoto is open, assuming that such a videoconf call would also have a userspace driver).
IMO, the right solution is to work on a proper snapshot mode, in kernelspace, and to move the drivers that already have a kernelspace counterpart out of Gphoto.
I agree that solution 1), i.e. all the driver bits in kernelspace, is the right solution. This is unrelated to snapshot mode though; snapshot mode is all about taking live snapshots, whereas in this case we are downloading pictures which have already been taken (perhaps days ago) from device memory.
What we need for this is a simple API (new v4l ioctl's I guess) for the stillcam mode of these dual mode cameras (stillcam + webcam). So that the webcam drivers can grow code to also allow access to the stored pictures, which were taken in standalone (iow not connected to usb) stillcam mode.
This API does not need to be terribly complex. AFAIK all of the currently supported dual-mode cameras don't have filenames, only picture numbers,
Trying to remember any actual exceptions to this statement. No, at the moment I can not. But better not to assume it could never happen.
so the API could consist of a simple get highest picture nr / is picture X present (some slots may contain deleted pictures) / get picture X / delete picture X / delete all API.
One needs to be really careful about setting up a general framework for that kind of thing. Some of these cameras can do truly amazing things, which I mean in a negative sense, not a positive sense. The sq905 cameras are an extreme example. The only way to select a photo to download is to download all previous photos and toss the data. The jl2005c cameras (which mercifully are not dual-mode cameras) are even worse. Those will only permit one to dump the entire memory of the camera. What I am saying is that weird behavior of cameras designed with insane chipsets built with cost-cutting as the first priority must be anticipated.
That sounds workable. I would map it in a way close to the controls API (or like the DVB FE_[GET|SET]_PROPERTY API), as this would make it easier to expand in the future, if we start to see webcams with file names or other things like that.
If others are willing to help flesh out an API for this, I can write a proposal and submit it a few weeks before the Media Subsystem Workshop starts.
Theodore Kilgore
Hi Mauro,
On Wed, Aug 03, 2011 at 02:21:05PM -0300, Mauro Carvalho Chehab wrote:
As already announced, we're continuing the planning for this year's media subsystem workshop.
To avoid overriding the main ML with workshop-specifics, a new ML was created: workshop-2011@linuxtv.org
I'll also be updating the event page at: http://www.linuxtv.org/events.php
Over the one-year period, we had 242 developers contributing to the subsystem. Thank you all for that! Unfortunately, the space there is limited, and we can't affort to have all developers there.
Due to that some criteria needed to be applied to create a short list of people that were invited today to participate.
The main criteria were to select the developers that did significant contributions for the media subsystem over the last 1 year period, measured in terms of number of commits and changed lines to the kernel drivers/media tree.
As the used criteria were the number of kernel patches, userspace-only developers weren't included on the invitations. It would be great to have there open source application developers as well, in order to allow us to tune what's needed from applications point of view.
So, if you're leading the development of some V4L and/or DVB open-source application and wants to be there, or you think you can give good contributions for helping to improve the subsystem, please feel free to send us an email.
With regards to the themes, we're received, up to now, the following proposals:
---------------------------------------------------------+----------------------
 THEME                                                    | Proposed-by:
---------------------------------------------------------+----------------------
 Buffer management: snapshot mode                         | Guennadi
 Rotation in webcams in tablets while streaming is active | Hans de Goede
 V4L2 Spec - ambiguities fix                              | Hans Verkuil
 V4L2 compliance test results                             | Hans Verkuil
 Media Controller presentation (probably for Wed, 25)     | Laurent Pinchart
 Workshop summary presentation on Wed, 25                 | Mauro Carvalho Chehab
---------------------------------------------------------+----------------------
From my side, I also have the following proposals:
- DVB API consistency - what to do with the audio and video DVB API's
that conflict with V4L2 and (somewhat) with ALSA?
- Multi FE support - How should we handle a frontend with multiple
delivery systems like DRX-K frontend?
- videobuf2 - migration plans for legacy drivers
- NEC IR decoding - how should we handle 32, 24, and 16 bit protocol
variations?
Even if you won't be there, please feel free to propose themes for discussion, in order to help us to improve even more the subsystem.
Drawing from our recent discussions over e-mail, I would like to add another topic: the V4L2 on desktop vs. embedded systems.
V4L2 is being used as an application interface on desktop systems, but recently, as support has been added for complex camera ISPs in embedded systems, it is used for a different purpose: it is a much lower level interface for specialised user space, which typically contains a middleware layer that provides its own application interface (e.g. GSTphotography). The V4L2 API in the two different kinds of systems is exactly the same, but its role is different: the hardware drivers are not up to offering an interface suitable for use by general purpose applications.
To run general purpose applications on such embedded systems, I have promoted the use of libv4l (either plain or with plugins) to provide what is missing between the V4L2, Media controller and v4l2_subdev interfaces provided by kernel drivers --- which mostly allow controlling the hardware --- and what general purpose applications need. Much of the missing functionality is usually implemented in algorithm frameworks and libraries that do not fit into kernel space: they are complex, and often the algorithms themselves are under very restrictive licenses. There is an upside: libv4l does contain an automatic exposure and a white balance algorithm which are suitable for some use cases.
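As an illustration, this is roughly how a general purpose application can already go through the libv4l2 wrapper and ask for a simple format, with libv4l doing whatever conversion or processing the kernel driver does not offer (the device path and resolution below are just example values):

    #include <fcntl.h>
    #include <string.h>
    #include <libv4l2.h>
    #include <linux/videodev2.h>

    int main(void)
    {
            struct v4l2_format fmt;
            unsigned char frame[640 * 480 * 3];
            int fd = v4l2_open("/dev/video0", O_RDWR);

            if (fd < 0)
                    return 1;

            memset(&fmt, 0, sizeof(fmt));
            fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            fmt.fmt.pix.width = 640;
            fmt.fmt.pix.height = 480;
            fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24; /* emulated by libv4l if needed */
            v4l2_ioctl(fd, VIDIOC_S_FMT, &fmt);

            v4l2_read(fd, frame, sizeof(frame));    /* one converted frame */
            v4l2_close(fd);
            return 0;
    }

On an embedded ISP this is not enough by itself, which is exactly why the plugin mechanism and the policy questions below matter.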
Defining functionality suitable for general purpose applications at the level of V4L2 requires scores of policy decisions on embedded systems. One example is the pipeline configuration, for which the Media controller and v4l2_subdev interfaces are currently being used. Applications such as Fcam URL:http://fcam.garage.maemo.org/ do need to make these policy decisions by themselves. For this reason, I consider it highly important that the low level hardware control interface remains available to user space applications.
I think it is essential for the future support of such embedded devices in the mainline kernel to come to a common agreement on how this kind of system should be implemented, in a way that takes everyone's requirements into account as well as possible. I believe this is in everyone's interest.
Kind regards,
Em 11-08-2011 07:16, Sakari Ailus escreveu:
Hi Mauro,
On Wed, Aug 03, 2011 at 02:21:05PM -0300, Mauro Carvalho Chehab wrote:
As already announced, we're continuing the planning for this year's media subsystem workshop.
To avoid overriding the main ML with workshop-specifics, a new ML was created: workshop-2011@linuxtv.org
I'll also be updating the event page at: http://www.linuxtv.org/events.php
Over the one-year period, we had 242 developers contributing to the subsystem. Thank you all for that! Unfortunately, the space there is limited, and we can't affort to have all developers there.
Due to that some criteria needed to be applied to create a short list of people that were invited today to participate.
The main criteria were to select the developers that did significant contributions for the media subsystem over the last 1 year period, measured in terms of number of commits and changed lines to the kernel drivers/media tree.
As the used criteria were the number of kernel patches, userspace-only developers weren't included on the invitations. It would be great to have there open source application developers as well, in order to allow us to tune what's needed from applications point of view.
So, if you're leading the development of some V4L and/or DVB open-source application and wants to be there, or you think you can give good contributions for helping to improve the subsystem, please feel free to send us an email.
With regards to the themes, we're received, up to now, the following proposals:
---------------------------------------------------------+----------------------
 THEME                                                    | Proposed-by:
---------------------------------------------------------+----------------------
 Buffer management: snapshot mode                         | Guennadi
 Rotation in webcams in tablets while streaming is active | Hans de Goede
 V4L2 Spec - ambiguities fix                              | Hans Verkuil
 V4L2 compliance test results                             | Hans Verkuil
 Media Controller presentation (probably for Wed, 25)     | Laurent Pinchart
 Workshop summary presentation on Wed, 25                 | Mauro Carvalho Chehab
---------------------------------------------------------+----------------------
From my side, I also have the following proposals:
- DVB API consistency - what to do with the audio and video DVB API's
that conflict with V4L2 and (somewhat) with ALSA?
- Multi FE support - How should we handle a frontend with multiple
delivery systems like DRX-K frontend?
- videobuf2 - migration plans for legacy drivers
- NEC IR decoding - how should we handle 32, 24, and 16 bit protocol
variations?
Even if you won't be there, please feel free to propose themes for discussion, in order to help us to improve even more the subsystem.
Drawing from our recent discussions over e-mail, I would like to add another topic: the V4L2 on desktop vs. embedded systems.
Topic added to: http://www.linuxtv.org/events.php
V4L2 is being used as an application interface on desktop systems, but recently, as support has been added for complex camera ISPs in embedded systems, it is used for a different purpose: it is a much lower level interface for specialised user space, which typically contains a middleware layer that provides its own application interface (e.g. GSTphotography). The V4L2 API in the two different kinds of systems is exactly the same, but its role is different: the hardware drivers are not up to offering an interface suitable for use by general purpose applications.
To run general purpose applications on such embedded systems, I have promoted the use of libv4l (either plain or with plugins) to provide what is missing between the V4L2, Media controller and v4l2_subdev interfaces provided by kernel drivers --- which mostly allow controlling the hardware --- and what general purpose applications need. Much of the missing functionality is usually implemented in algorithm frameworks and libraries that do not fit into kernel space: they are complex, and often the algorithms themselves are under very restrictive licenses. There is an upside: libv4l does contain an automatic exposure and a white balance algorithm which are suitable for some use cases.
Defining functionality suitable for general purpose applications at the level of V4L2 requires scores of policy decisions on embedded systems. One example is the pipeline configuration, for which the Media controller and v4l2_subdev interfaces are currently being used. Applications such as Fcam URL:http://fcam.garage.maemo.org/ do need to make these policy decisions by themselves. For this reason, I consider it highly important that the low level hardware control interface remains available to user space applications.
I think it is essential for the future support of such embedded devices in the mainline kernel to come to a common agreement on how this kind of system should be implemented, in a way that takes everyone's requirements into account as well as possible. I believe this is in everyone's interest.
Since we started moving to the MC API, I was afraid that we'd end up needing to differentiate between a typical consumer hardware driver and specialized SoC hardware for embedded systems.
I remember mentioning that a few times, either on the ML or in some face-to-face meetings.
That likely means that we'll need to create some profiles in the V4L2 spec, covering which APIs should be implemented by each device type, and how libv4l should be used.
While this is not written anywhere except in the source code, we currently have several profiles already, adopted by most of the drivers.
In very rough terms, we have:
1) Radio devices: Simplest API: don't implement any streaming API nor any video-specific ioctl.
2) TV grabber: radio profile + streaming API + video ioctl's + ALSA API;
3) Webcams: TV grabber profile + libv4l for proprietary FOURCC formats;
4) TV tuners: TV grabber profile + tuner ioctl's + Remote Controller API;
5) Embedded cameras: Webcams + MC API.
A few devices don't fit the above, as they use a few more things. For example, pvrusb2 uses streaming ioctl's for radio, as it provides an MPEG stream with the audio channels.
In other words, I think that we should add a table to the V4L2 spec, mapping each possible ioctl of the available APIs to each possible device type.
Something like:
----------------+---------------+---------------+---------------+---------------+----------------
 IOCTL/API       | RADIO         | TV GRABBER    | WEB CAM       | TV TUNER      | EMBEDDED CAMERA
----------------+---------------+---------------+---------------+---------------+----------------
 VIDIOC_QUERYCAP | Mandatory     | Mandatory     | Mandatory     | Mandatory     | Mandatory
 VIDIOC_G_TUNER  | Mandatory     | No            | No            | Mandatory     | No
 ALSA API        | Optional      | Optional      | Optional      | Optional      | Optional
 ...
----------------+---------------+---------------+---------------+---------------+----------------
As unimplemented ioctl's will return -ENOTTY since kernel 3.1, it will be easier for applications to detect the device type based on that and to work accordingly.
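Just to illustrate the idea (this is only a sketch, not a proposed detection scheme; the classification below is deliberately simplified):

    #include <errno.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int has_ioctl(int fd, unsigned long req, void *arg)
    {
            if (ioctl(fd, req, arg) == 0)
                    return 1;
            return errno != ENOTTY;         /* other errors still mean "implemented" */
    }

    static const char *guess_profile(int fd)
    {
            struct v4l2_tuner tuner;
            struct v4l2_format fmt;

            memset(&tuner, 0, sizeof(tuner));
            memset(&fmt, 0, sizeof(fmt));
            fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

            if (!has_ioctl(fd, VIDIOC_G_FMT, &fmt))
                    return "radio";                 /* no video ioctls at all */
            if (has_ioctl(fd, VIDIOC_G_TUNER, &tuner))
                    return "TV grabber/tuner";
            return "webcam or embedded camera";
    }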
A similar table will likely be needed, in order to map what controls are recommended for each device type.
Maybe the spec should also give a hint about where certain controls should be implemented: at the sensor, at the bridge/DSP block or software-emulated in libv4l, when the hardware doesn't have direct support for it.
Regards, Mauro