First draft of 0002-capture-matching-next-steps #2
base: main
Conversation
* Good, because some preliminary research has already been done (@ZavenArra) that shows potential for this solution to work.
* Bad, because this feature requires advanced skills in GIS analysis and algorithm development.

### 3. Create experimental manual capture matching UI that offers manual spatial analysis relevant to disambiguating matching captures.
I think this might be the best bet: it would be easier to train admin users than to make the application more complex. Other options seem to require a lot more 'resources'.
Plus, over time the data collected could likely be used to assess some of the automated solutions, which seems better to me than committing to automation a priori and using biased or fabricated datasets. However, I'd imagine the development of this UI would be pretty resource intensive.
I agree that this option feels like it has a lot of potential for improvements in the short term. I think we could prototype a UI fairly quickly to collect feedback on its utility and simplicity for users. If we do go down the automated route, we may still want a UI to show the correlation and help spot errors.
Perhaps the simplest addition to the Capture Matching UI is to provide the user with a visual guide to camera orientation (compass rose and dip-angle gauge) for each photo in a comparison pair (but no orientation is enforced). This would require storing orientation during the capture, for phones which have this capacity (see solution 1, above).
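For reference, a minimal sketch of what storing orientation at capture time could look like on Android. `CaptureOrientation` and `OrientationSampler` are illustrative names, not existing treetracker-android classes; only the `SensorManager` calls are real APIs:

```kotlin
// Hypothetical sketch: sample compass azimuth and dip (pitch) so the value
// current at photo time can be stored with the capture and later rendered
// as a compass rose / dip gauge in the admin panel.
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

data class CaptureOrientation(val azimuthDeg: Float, val pitchDeg: Float)

class OrientationSampler(private val sensorManager: SensorManager) : SensorEventListener {

    @Volatile
    var latest: CaptureOrientation? = null
        private set

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_UI)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val rotation = FloatArray(9)
        SensorManager.getRotationMatrixFromVector(rotation, event.values)
        val orientation = FloatArray(3) // [azimuth, pitch, roll] in radians
        SensorManager.getOrientation(rotation, orientation)
        latest = CaptureOrientation(
            azimuthDeg = Math.toDegrees(orientation[0].toDouble()).toFloat(),
            pitchDeg = Math.toDegrees(orientation[1].toDouble()).toFloat()
        )
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```

The rotation-vector sensor is not present on all low-end devices, which is why this would only apply "for phones which have this capacity".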
@camwebb The implementation of your suggestion here is mirroring option 1, enforcing orientations in the camera, or more specifically compass heading. Are you voting for compass heading enforcement in the image capture process as the low-hanging fruit?
@ZavenArra No, I'm not suggesting enforcing anything. Simply giving the Admin panel user easy-to-assimilate info on how the photo was taken.
I see, so simply reporting the compass bearing and smartphone rotation matrix to the admin panel user. We have facilities to collect this information, so I think this is a great suggestion on how to start making use of it. There are a few other parameters we collect, such as step count since last capture, that could also be reported.
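Something like the following could describe the extra per-capture context surfaced to the admin panel. This is an illustrative sketch only; the field names are assumptions, not the actual capture schema:

```kotlin
// Hypothetical shape of the capture metadata reported alongside each photo
// in the matching UI. None of these names come from the real API.
data class CaptureContext(
    val compassBearingDeg: Float?,    // heading at the moment the photo was taken
    val rotationMatrix: List<Float>?, // 3x3 device rotation matrix, row-major
    val stepsSinceLastCapture: Int?,  // pedometer delta between captures
    val gpsAccuracyMeters: Float?     // reported horizontal accuracy of the fix
)
```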
4. Research approaches to image analysis that can produce a similarity metric to improve selection of candidates for location based matching of captures.
5. Attempt to train models using machine learning based on existing database of capture images to identify repeat captures.
6. Incorporate assigned species information during capture matching.
7. Create curated training sets of matched trees for to support training of machine learning based models.
Typo: "for to"
* Good, because the images captured will have more consistent parameters.
* Good, because this only affects the mobile application and does not require data research or algorithm development.
* Bad, because the mobile application will become more complex for our low technical literacy users, and we will have to invest careful UX/UI development time to make these features easily usable.
Bad, because some trees may be awkward or impossible to photograph from specific positions.
Bad, because any increase in tracking time per tree will negatively impact our target users.
@Davidezrajay Can you expand here on your thoughts on minor increases in tracking time per tree vs absolute feasibility of capture matching? Which one takes higher priority?
They are both critical. It is a fine line between time and usable data.
Time: If people/orgs won't use the system because it takes too long/is too expensive to track, there will be no data.
Data: If we collect bad/inaccurate data, it will be worth less or worthless.
Prioritize tracking time, as it is extremely critical for adoption on the ground. I recommend picking a usable threshold of an average of 5 seconds per capture, and working around that to get the most accurate data possible.
History: The fastest the app has been is about 3 seconds, with an average GPS accuracy of 10m and an excessive amount of worthless data. The original app had a setting_id that allowed us to change the accuracy requirement based on the organization's needs. Users could set the required accuracy from something like 2 meters to something like 1 km.
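A per-organization accuracy setting of that kind could be as simple as the sketch below. This is hypothetical; the names and bounds are assumptions, not the old app's actual setting_id mechanism:

```kotlin
// Illustrative sketch of a configurable accuracy requirement.
data class AccuracySetting(val requiredAccuracyMeters: Float) {
    init {
        require(requiredAccuracyMeters in 2f..1000f) {
            "Accuracy requirement must be between 2 m and 1 km"
        }
    }
}

// Accept a GPS fix only if its reported accuracy meets the org's requirement.
fun isFixAcceptable(reportedAccuracyMeters: Float, setting: AccuracySetting): Boolean =
    reportedAccuracyMeters <= setting.requiredAccuracyMeters
```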
Understood, but unfortunately I have to strongly disagree.
We studied the GPS error on a suite of phones and found that unless we wait for GPS convergence, the location data is garbage: bad enough to make repeat monitoring (aka capture matching) impossible. It is imperative that we wait for a solid GPS convergence. If we do not have GPS convergence, none of the approaches discussed in this ADR that use GPS are possible; this is what we found by doing grid based studies and visual analysis.
On good phones, convergence will happen faster; on poor phones it can take more time or lead to a timeout condition. If a planting organization wants a fast and accurate experience, they do have the option to invest in higher quality phones.
Currently the convergence timeout is set to 20 seconds here; we could explore criteria for adjusting this default. https://github.com/Greenstand/treetracker-android/blob/4716e6b575f099e7d68efa666c184f605d12e75b/app/src/main/java/org/greenstand/android/TreeTracker/models/Configuration.kt#L36
We could also study the GPS convergence criteria and see if there is any room for fine-tuning this for faster reported good convergence.
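For discussion, here is a minimal sketch of one possible convergence criterion: declare convergence once the spread of recent fixes falls below a threshold. This is illustrative only, not the logic in Configuration.kt, and the window size and threshold are assumptions:

```kotlin
import kotlin.math.cos
import kotlin.math.sqrt

data class Fix(val lat: Double, val lon: Double)

// Converged when the standard deviation of the recent fix window, in meters,
// drops below maxStdDevMeters. A separate timeout (e.g. 20 s) would bound
// how long we wait for this to become true.
fun isConverged(window: List<Fix>, maxStdDevMeters: Double = 5.0): Boolean {
    if (window.size < 5) return false // need a few samples before judging
    val meanLat = window.map { it.lat }.average()
    val meanLon = window.map { it.lon }.average()
    // Rough meters-per-degree conversion near the mean latitude.
    val mPerDegLat = 111_320.0
    val mPerDegLon = 111_320.0 * cos(Math.toRadians(meanLat))
    val meanSquaredDeviation = window.map {
        val dy = (it.lat - meanLat) * mPerDegLat
        val dx = (it.lon - meanLon) * mPerDegLon
        dx * dx + dy * dy
    }.average()
    return sqrt(meanSquaredDeviation) <= maxStdDevMeters
}
```

Tuning the window size and threshold against the grid-based study data would be one way to look for "faster reported good convergence".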
It is true, we have gotten some complaints about the old app being "better" because it was faster.
I agree with Zaven on focusing on the best possible convergence, AND we need to ensure the following:
- The longer waiting time needs to be clearly communicated through the app. At the least, organizations need to be totally aware of the added waiting time; a good place to start publishing this is the Play Store listing, which currently does not have that clause. (I just opened a ticket on this: Playstore blurb to explain added waiting time on convergence, treetracker-android#982.)
- The waiting time has to have some kind of visible progress indicator (I believe Visualize convergence while capturing, treetracker-android#970, is going to address this); urgency just increased here...
- Possibly have a switch with the option of "waiting for convergence" vs. "fast and useless data".
It is true that without the capture matching tool convergence is secondary, but the biggest fundamental change in the system during the domain model migration is around the tree and its recaptures. Matching them has become a priority, and without accuracy this core functionality will only get further out of reach.
As per @ZavenArra's suggestion, I've opened this issue to look at optimizing the convergence: Greenstand/Greenstand-Overview#125
We are currently getting feedback on current tracking times being too long.
### 2. Research feasibility of leveraging error autocorrelation between GPS measurements to improve selection of candidates for location based matching of captures.
This option seems to be the right path to go down, if we can get the dedicated resources required.
If we have sufficient markers in the data such as this, we may be able to calculate a "match confidence" score and present it to the user.
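As a sketch of what that could look like, several normalized cues could be fused into one score. The cue set and weights below are assumptions for illustration, not tuned values:

```kotlin
// Hypothetical cue set; each score is normalized to 0..1 upstream.
data class MatchCues(
    val locationScore: Double, // from GPS proximity / error autocorrelation model
    val bearingScore: Double,  // agreement of compass bearings, if recorded
    val imageScore: Double     // visual similarity, if available
)

// Weighted sum as a simple first-pass confidence; weights are placeholders.
fun matchConfidence(cues: MatchCues): Double {
    val weighted = listOf(
        0.5 to cues.locationScore,
        0.2 to cues.bearingScore,
        0.3 to cues.imageScore
    )
    return weighted.sumOf { (w, s) -> w * s }.coerceIn(0.0, 1.0)
}
```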
I have done some preliminary analysis work on this; I have some math that takes into account the 'travel vector' of a grower as they move between trees.
Initial analysis and algorithm work for the travel vector approach can be found here: https://github.com/Greenstand/treetracker-location-analysis/blob/main/travel-vector/spatial.ipynb
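A hedged sketch of the travel-vector idea (see the linked notebook for the real analysis): predict the next capture's position from the grower's last displacement, then rank candidate matches by distance to that prediction. All names here are illustrative:

```kotlin
data class Point(val lat: Double, val lon: Double)

// Assume a roughly constant stride and heading between consecutive captures.
fun predictNext(previous: Point, current: Point): Point = Point(
    lat = current.lat + (current.lat - previous.lat),
    lon = current.lon + (current.lon - previous.lon)
)

// Order candidate captures by closeness to the predicted position.
// Squared degree distance is adequate for ranking nearby candidates.
fun rankCandidates(previous: Point, current: Point, candidates: List<Point>): List<Point> {
    val predicted = predictNext(previous, current)
    return candidates.sortedBy { c ->
        val dLat = c.lat - predicted.lat
        val dLon = c.lon - predicted.lon
        dLat * dLat + dLon * dLon
    }
}
```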
## Pros and Cons of the Options

### 1. Implement mobile application side image collection improvements, including enforced phone rotation, enforced compass bearing, and perhaps others.
Do we currently ask the mobile user if it's a first capture or a recapture, or do we only collect this information where orgs make use of the notes field?
I'm wondering if there are any more cues we can collect at the point of capture, without making the app more difficult to use.
We do not currently ask if it is the first capture or not. One challenge our organizational users are having is that multiple growers are claiming the same tree as the "first" capture.
The original app had an "update" or "new tree" button. The update button gave the user the option of which tree they were updating, based on GPS. The function worked well at small scale (10 or 20 trees), but with any more than that it just crashed.
My view is that we need to be able to verify user-entered claims related to tree updates, species, dates, etc.
When I reviewed this version of the app 4 years ago, I determined that the crashing issue was not the major obstacle to the implementation. The main issues were:
- 'update tree' cannot obtain a geofix accurate enough to determine which tree is being updated (the origin of the capture matching problem)
- users do not necessarily know if they are updating or tracking a new tree
- downloading relevant data for a tree update process to the application requires data connectivity in the field, and appropriate bandwidth.
(First attempt at ADR participation - hope I got it right!)
Added point 8. I did not manage to fork it before; I got a 404 from GitHub, and I really still need some training on this process.
Co-authored-by: Cam Webb <cw@camwebb.info>