<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>CS 371 MCL CozmoBot</title>
</head>
<body><h1 align="center" id="top-page">Anki Cozmo Kidnapping using Monte Carlo Localization</h1>
<p align="center">
Nicholas Stach, Quan Nguyen, Brayton Alkinburgh, Douglas Harsha
<br>
<i>Spring 2023</i>
<br>
Src code ( <a href="https://github.com/QuanHNguyen232/MonteCosmoLocalization" target="_self">github</a>, <a href="https://www.youtube.com/watch?v=xvFZjo5PgG0" target="_self">zip</a><sup><a id="footnote-zipfile-ref" href="#footnote-zipfile">[1]</a></sup>)
<br>
<p align="center"><img src="readme-assets/cozmo-robot.jpg" width="" height="100"/></p>
</p>
<p>Goals:</p>
<ul>
<li>Implement MCL with Anki Cozmo</li>
<li>Create a panorama from collected images that the Cozmo bot can use to find its position and relocate itself.</li>
<li>Create means of displaying belief probabilities.</li>
<li>Docs: <a href="https://data.bit-bots.de/cozmo_sdk_doc/cozmosdk.anki.com/docs/api.html">Cozmo API</a></li>
<li>Helpful <a href="https://github.com/nheidloff/visual-recognition-for-cozmo-with-tensorflow/blob/master/1-take-pictures/take-pictures.py">link</a> for taking pictures with Cozmo and saving them.</li>
</ul>
<hr>
<h2 id="table-of-contents">Table of Contents</h2>
<ol>
<li><a href="#setup">Setup</a></li>
<li><a href="#understanding-algorithm">Understanding algorithm</a></li>
<li><a href="#tasks">Tasks</a></li>
<li><a href="#files-and-dirs">Files and Dirs</a></li>
<li><a href="#approaches">Approaches</a></li>
<li><a href="#running">Running</a></li>
<li><a href="#working-log">Working log</a></li>
</ol>
<hr>
<h2 id="setup">Setup</h2>
<p>Following the <a href="http://cozmosdk.anki.com/docs/initial.html">installation guide from Cozmo</a></p>
<ul>
<li><p>Setup on computer + phone:</p>
<ul>
<li><details><summary>Computer Setup</summary>
<ol>
<li>Follow the <a href="http://cozmosdk.anki.com/docs/initial.html">Initial setup</a>:<ul>
<li>Install Anki Cozmo SDK</li>
<li>SDK Example Programs: download and extract to any directory</li>
</ul>
</li>
<li>Install <a href="http://cozmosdk.anki.com/docs/adb.html#adb">Android Debug Bridge</a>, follow that to <a href="http://cozmosdk.anki.com/docs/adb.html#final-install">Final installation</a>.</li>
<li>Set up the environment (we recommend Anaconda): Python 3.8 + the required packages in requirements.txt:<ul>
<li>Create environment:<pre><code class="language-bash">conda create -n envname python=3.8
</code></pre>
</li>
<li>Activate environment:<pre><code class="language-bash">conda activate envname
</code></pre>
</li>
<li>Install all required packages:<pre><code class="language-bash">pip install -r requirements.txt
</code></pre>
</li>
</ul>
</li>
</ol>
</details>
</li>
<li><details><summary>Mobile Phone (Android) Setup</summary>
<ol>
<li> Install official Cozmo app on phone (<a href="https://play.google.com/store/apps/details?id=com.digitaldreamlabs.cozmo2&hl=en_US&gl=US">Android</a>/<a href="https://apps.apple.com/us/app/cozmo/id1154282030">iOS</a>)</li>
<li> Enable USB debugging on your device. This step may vary based on your phone, so check the <a href="https://developer.android.com/studio/debug/dev-options">Android instructions</a> if needed.</li>
<li>These steps are required every time you work on the project:<ol>
<li> Connect the USB cable from the phone to the computer</li>
<li>Run <code>adb devices</code> in the Command Prompt on the computer to check and authorize your phone (until it shows "device" instead of "unauthorized")</li>
<li> Open Cozmo app and turn Cozmo bot on </li>
<li> Connect to Cozmo bot’s wifi network.</li>
<li> Enable SDK on Cozmo app</li>
</ol>
</li>
</ol>
</li>
</ul>
</details>
<ul>
<li>After setting up both machines, download the SDK Examples, open the Cozmo app, connect to the Cozmo bot (ensure the robot does not fall off the desk), then go to Settings -> Enable SDK. Then try running the examples.</li>
</ul>
</li>
<li><details><summary>Website Publishing</summary>
<ol>
<li><p>Generate html file:</p>
<ul>
<li>Find Chrome's download folder on your local machine; the script will ask for it in the terminal.</li>
<li>Run the generator:<pre><code class="language-bash">python html-generator.py
</code></pre>
</li>
</ul>
</li>
<li><p>Upload the html file and assets (if any) to the <code>/public_html/</code> folder on a lab machine at Gettysburg College.</p>
<pre><code class="language-bash">chmod -R 755 ~/public_html/folder_to_publish
chmod -R 755 .
</code></pre>
</li>
</ol>
</details></li>
</ul>
<p align="right"><a href="#top-page">[Back to top]</a></p>
<hr>
<h2 id="understanding-algorithm">Understanding algorithm</h2>
<p>Comments in our code refer to tables (e.g. Table 5.2); these refer to the algorithms from "<a href="http://robots.stanford.edu/probabilistic-robotics/">Probabilistic Robotics</a>" by Sebastian Thrun, Wolfram Burgard, and Dieter Fox. A minimal Python sketch of the update loop follows the figures below.</p>
<ul>
<li><details><summary>Pseudo code for MCL</summary>
<p align="center"><img src="readme-assets/mcl-algo.png" width="" height="400"/></p>
</details>
</li>
<li><details><summary>Table 5.2</summary>
<p align="center"><img src="readme-assets/mcl-table-5_2.png" width="" height="350"/></p>
</details>
</li>
<li><details><summary>Table 5.4</summary>
<p align="center"><img src="readme-assets/mcl-table-5_4.png" width="" height="350"/></p>
<p> The formula above, however, is a bit different from the one from Prof. Neller's lecture:</p>
<p align="center"><img src="readme-assets/mcl-table-5_4-prof-neller.png" width="" height="350"/></p></li>
</ul>
</details>
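<p>As a rough illustration of how these tables translate into code, below is a minimal Python sketch of one MCL update over candidate headings. The helper callables <code>motion_model</code> and <code>sensor_model</code> are hypothetical stand-ins for the models in the tables above; this is not the exact implementation in <code>cozmo_MCL.py</code>.</p>
<pre><code class="language-python">import random

def mcl_update(particles, rotation_deg, sensor_img, motion_model, sensor_model):
    """One iteration of Monte Carlo Localization (cf. the pseudocode above).

    particles      -- list of candidate headings (degrees)
    rotation_deg   -- how far the robot was commanded to rotate this step
    sensor_img     -- the image just captured by Cozmo's camera
    motion_model   -- hypothetical callable: (heading, rotation) -> new heading sample
    sensor_model   -- hypothetical callable: (image, heading) -> likelihood weight
    """
    # 1. Motion update: move every particle by the commanded rotation (with noise).
    moved = [motion_model(p, rotation_deg) for p in particles]

    # 2. Sensor update: weight each particle by how well the captured image
    #    matches the map (panorama) at that heading.
    weights = [sensor_model(sensor_img, p) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]

    # 3. Resampling: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
</code></pre>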
<p align="right"><a href="#top-page">[Back to top]</a></p>
<hr>
<h2 id="tasks">Tasks</h2>
<ul>
<li><input disabled="" type="checkbox"> Future:<ul>
<li>How to improve accuracy</li>
<li>How to make it work in a dark environment (perhaps using edge detection?)</li>
<li>How to localize with just 4 images (0, 90, 180, 270 degrees) (divide images into $n$ columns, then compare with the $n$ columns of each of the $m$ map images?)</li>
</ul>
</li>
<li><input checked="" disabled="" type="checkbox"> Get the robot to turn to the location with the highest belief probability after MCL is done</li>
<li><input checked="" disabled="" type="checkbox"> Polish borrowed code: make it more succinct, use methods we have already coded.</li>
<li><input checked="" disabled="" type="checkbox"> Read MCL slides + Java demo code (convert to python) (Bray & Doug)</li>
<li><input checked="" disabled="" type="checkbox"> Try running MCL w/ pano</li>
<li><input checked="" disabled="" type="checkbox"> How to rotate robot</li>
<li><input checked="" disabled="" type="checkbox"> How to get image from cozmo’s camera (file <code>picture-collection.py</code>)</li>
<li><input checked="" disabled="" type="checkbox"> Fix MCL code for images (check <a href="https://www.youtube.com/watch?v=JhkxtSn9eo8">Youtube</a> and github) (file <code>my-MCL.py</code>)</li>
<li><input checked="" disabled="" type="checkbox"> Create a panorama (optional)<ul>
<li>Check openCV Stitching (e.g from prev group: <a href="http://cs.gettysburg.edu/~tneller/archive/cs371/cozmo/22sp/fuller/Stitching.py">L.Attai,R.Fuller,C.Rhodes</a>)</li>
</ul>
</li>
<li><input checked="" disabled="" type="checkbox"> Crop pano img + sensor img in MCL algo (Nick)</li>
<li><input checked="" disabled="" type="checkbox"> Recreate MCL as per supplied examples (based on <a href="http://cs.gettysburg.edu/~tneller/archive/cs371/cozmo/22sp/fuller/MCLocalize.py">L.Attai,R.Fuller,C.Rhodes</a>)</li>
</ul>
<p align="right"><a href="#top-page">[Back to top]</a></p>
<hr>
<h2 id="files-and-dirs">Files and Dirs</h2>
<ul>
<li><code>/cozmo-images-kidnap/</code>: images for kidnap problem (collected data that is updated for each run)<ul>
<li><code>x.jpg</code>: images captured when running <code>kidnap.py</code> for localization.</li>
<li><code>data.csv</code>: result of localization, used to generate histogram.</li>
<li><code>hist.png</code>: histogram of collected belief probabilities after MCL is run (a sketch of how such a histogram could be generated follows this list)</li>
</ul>
</li>
<li><code>/model/</code>: directory for training/inference neural network model.</li>
<li><code>cozmo_MCL.py</code>: MCL implementation, based on previous group's work (stitching method).</li>
<li><code>kidnap.py</code>: main file to run localization (stitching method).</li>
<li><code>cozmo_MCL_nn.py</code>: MCL ver. for neural-net w/ much cleaner code.</li>
<li><code>kidnap_nn.py</code>: main file to run localization (neural-net).</li>
<li><code>MCL_simple.py</code>: simple version of MCL (not the MCL described in Prof. Neller's lecture -> not accurate)</li>
<li><code>requirements.txt</code>: has all required installs for use with our code and Cozmo</li>
<li><code>html-generator.py</code>: converts <code>md</code> to an <code>html</code> file using <a href="https://codebeautify.org/markdown-to-html">codebeautify.org</a>. This yields much better html than Python libraries like <a href="https://python-markdown.github.io/">python-markdown</a></li>
<li><code>index.html</code>: result of <code>html-generator.py</code>.</li>
</ul>
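<p>As an illustration of how <code>hist.png</code> could be produced from <code>data.csv</code> (a hedged sketch: the column names and binning here are assumptions for illustration, not necessarily what <code>kidnap.py</code> actually writes):</p>
<pre><code class="language-python">import numpy as np
import matplotlib.pyplot as plt

# Assumed CSV layout: one row per particle with its panorama pixel position
# and its belief weight before and after MCL (column names are hypothetical).
data = np.genfromtxt("cozmo-images-kidnap/data.csv", delimiter=",", names=True)

plt.hist(data["position"], bins=30, weights=data["prior_weight"], label="before MCL")
plt.hist(data["position"], bins=30, weights=data["posterior_weight"], label="after MCL")
plt.xlabel("panorama position (pixels)")
plt.ylabel("belief probability")
plt.legend()
plt.savefig("cozmo-images-kidnap/hist.png")
</code></pre>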
<p align="right"><a href="#top-page">[Back to top]</a></p>
<hr>
<h2 id="approaches">Approaches</h2>
<ul>
<li><p>Stitching method:</p>
<ul>
<li><details><summary>Functionality</summary>
<p> Our Cozmo MCL was able to localize with reasonable accuracy in some environments (in cases where the stitching works well). Locations with repetitive patterns or extreme amounts of light greatly reduced the accuracy of the localization. </p>
<p> Our group was able to improve upon a past group's MCL and make it give Cozmo a command to turn back toward home from its most believed location. Our group also created a histogram of clustered probabilities and used the highest belief in the MCL as the location to turn to.</p>
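<p> As a sketch of that last step, assuming the belief is kept as one probability per pixel column of the panorama (the names below are illustrative, not the ones in <code>cozmo_MCL.py</code>):</p>
<pre><code class="language-python">import numpy as np

def heading_from_belief(belief, pano_width_px):
    """Pick the panorama column with the highest belief and convert it to a
    heading, using the fact that the panorama spans 360 degrees."""
    best_col = int(np.argmax(belief))        # most believed pixel column
    return 360.0 * best_col / pano_width_px  # degrees from the "home" heading

# Example: with a 1500-px-wide panorama, a peak near column 1400
# corresponds to roughly 336 degrees from the starting heading.
</code></pre>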
</details>
</li>
<li><details><summary>Results</summary>
<p> Building off of the work of Leah Attai, Casey Rhodes, and Rachel Fuller, as well as examples provided by Mohammad Altaleb, we were able to construct a Monte Carlo Localization system that successfully localized the Anki Cozmo robot in some environments. Our system centered around a set of 30 photos taken 12 degrees apart to form a panorama. The panorama mapped the 360 degree area around the robot prior to its "kidnapping." By rotating the robot an arbitrary number of degrees, we then "kidnapped" the robot. The robot would then take a single picture, and we performed a set of pixel-matching operations to determine which section of the panorama most likely corresponded with the robot's kidnapped heading. We would repeat this process ten times, rotating the robot slightly each time to collect varied data and increase the accuracy of our final identification. Having thus localized, we turned the robot back towards its initial "home" position, represented as the starting point of the panorama, reversing the kidnapping.</p>
<p> Our system's performance was generally dependent upon the success of two functions. First, in order to create the panorama it was necessary to stitch together the 30 initial photos. Our stitching function, based on an example from OpenCV, struggled in certain environments that reduced the distinctions between photos. Too much or too little light, or a lack of identifiable landmarks, would stymie the algorithm and thus produce a low-confidence panorama that caused difficulties in our localization attempts. Second, our pixel-matching comparison algorithm struggled with much the same issues, running into situations where a large area of the panorama (such as a white wall) appeared as a candidate for localization. This issue was likely the cause of several failed tests where the Cozmo robot localized close to but not exactly at its home position, within 20 degrees or so.</p>
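<p> A minimal sketch of the kind of pixel-matching comparison described above: a plain mean-squared-error sliding window over the panorama. The actual code differs in details such as cropping and scoring; this assumes the panorama is at least as tall and wide as the sensor image.</p>
<pre><code class="language-python">import numpy as np

def best_match_column(panorama, sensor_img):
    """Slide the sensor image across the panorama and return the x offset
    (pixel column) where the mean squared pixel error is smallest."""
    pano = panorama.astype(np.float32)
    img = sensor_img.astype(np.float32)
    h, w = img.shape[:2]
    errors = []
    for x in range(pano.shape[1] - w + 1):
        window = pano[:h, x:x + w]               # candidate panorama section
        errors.append(np.mean((window - img) ** 2))
    return int(np.argmin(errors))                # best-matching pixel column
</code></pre>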
<p> The following table shows examples of test images taken in sequence that were and were not conducive to panorama-based localization. The top two, taken in the CS Lounge, are visually distinct with enough "landmarks" such as the striped couch, trash can, person, and chair to allow our algorithm to successfully localize. The bottom two, taken next to the CS Lounge printer, are too similar for the algorithm to extract any meaningful data -- localizations in this area appeared to be little more than guesses.</p>
<table align="center">
<tr>
<td><img src="readme-assets/62.jpg"></td>
<td><img src="readme-assets/63.jpg"></td>
</tr>
<tr>
<td><img src="readme-assets/25.jpg"></td>
<td><img src="readme-assets/26.jpg"></td>
</tr>
</table>
<p> This histogram shows the results of one test of our localization system. The x-axis is the width of the panorama in pixels, corresponding to the 360 degree area; the y-axis is the total probabilistic likelihood that the Cozmo is facing that part of the panorama. Blue lines represent the Cozmo's confidence in its location before localization, orange its confidence after localization. The point with the highest probability, here just after pixel 1400, was selected as the kidnapped location.</p>
<p align="center">
<img src="readme-assets/hist.png" width="" height="400"/>
</p>
</details>
</li>
</ul>
</li>
<li><p>NeuralNet method:</p>
<ul>
<li><details><summary>Functionality</summary>
<p> In theory this approach works well, and it performed well when we tested it on Google Colab. The advantage of this method is that it compares images based on their similarity, which should be more accurate than the pixel-by-pixel comparison used in the stitching method.</p>
<p> In the MCL function for this approach, the angle the Cozmo bot should turn back after determining its current position has not yet been verified.</p>
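<p> A minimal sketch of the similarity comparison this method relies on, using the pre-trained CLIP model from <code>sentence_transformers</code> mentioned in our working log (the pre-trained model alone proved insufficient for us, and the fine-tuned model in <code>/model/</code> works analogously; this is an illustration, not the exact code in <code>kidnap_nn.py</code>):</p>
<pre><code class="language-python">from PIL import Image
from sentence_transformers import SentenceTransformer, util

# Pre-trained image encoder; we later fine-tuned our own model on Cozmo images.
model = SentenceTransformer("clip-ViT-B-32")

def most_similar_index(sensor_path, map_paths):
    """Return the index of the map image whose embedding is closest
    (by cosine similarity) to the freshly captured sensor image."""
    sensor_emb = model.encode(Image.open(sensor_path))
    map_embs = model.encode([Image.open(p) for p in map_paths])
    scores = util.cos_sim(sensor_emb, map_embs)[0]  # one score per map image
    return int(scores.argmax())
</code></pre>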
</details>
</li>
<li><details><summary>Result</summary>
N/A
</details></li>
</ul>
</li>
</ul>
<p align="right"><a href="#top-page">[Back to top]</a></p>
<hr>
<h2 id="running">Running</h2>
<ul>
<li>Stitching method: after cleaning the code, we found errors, and the results got worse while we tried to fix them. Running the program (cleaned version):<pre><code class="language-bash">python kidnap.py
</code></pre>
If it does not work well, try the version under <code>/working_ver/</code>, which is believed to contain the working code. Remember to ensure that the file paths are correct.</li>
<li>NeuralNet method:<ul>
<li>Install extra packages. Note that this installs PyTorch, so make sure GPU is enabled:<pre><code class="language-bash">pip install -r ./model/requirements.txt
</code></pre>
</li>
<li>Run kidnapping problem:<pre><code class="language-bash">python kidnap_nn.py
</code></pre>
</li>
</ul>
</li>
</ul>
<p align="right"><a href="#top-page">[Back to top]</a></p>
<hr>
<h2 id="future-goals">Future Goals</h2>
<details><summary>Stitching method</summary>
<p> The stitching algorithm used in this project would sometimes struggle in environments with few landmarks or an excess or lack of light, and thus fail to stitch all images together (an issue from OpenCV). This would create a panorama that was not a true 360 degree view. Future groups could attempt a different stitching algorithm or build a world map in a different way. Our group's recommendation is to avoid the use of a panorama (stitched from images) altogether, as it proved too brittle for varied environments and was never as accurate as we had wanted. Other groups this semester reported similar negative findings on the use of a panorama as well.</p>
<p> Future groups could also rework our MCL so that Cozmo does not stop localizing until a certain belief probability or number of predictions for a location is reached. Our current implementation only rotates 10 times to localize before committing to the final belief probabilities.</p>
<p> Our group's localization also relied on a program to randomly determine a kidnap location and then automatically run MCL to localize. Future groups could have Cozmo map its environment, then enter a state where it tries to localize whenever it believes it is not at "home."</p>
</details>
<details><summary>NeuralNet method</summary>
<p> We would love to hear results from those groups that want to continue our work. Here are some suggestions to try:</p>
<ul>
<li>Start from the most basic step: take 72 images (rotating 5 degrees per image) and run the algorithm to check the accuracy (check both <em>estimated position</em> and <em>angle to rotate</em>).</li>
<li>Try with fewer images: 30, 20, or even 8. The fewer images used, the more work is needed to rotate the robot back to its original position accurately. We suggest smaller steps:<ol>
<li>Use our model to determine the closest image to the original position</li>
<li>Use a sliding window on that image (and the adjacent left and right images) to determine the exact position (like <a href="http://cs.gettysburg.edu/~durhbe01/cozmo/">Ben Durham's</a> approach).</li>
</ol>
</li>
<li>Try with different light levels.</li>
<li>Keep fine-tuning the model to achieve better representative vectors.</li>
</ul>
</details>
<p align="right"><a href="#top-page">[Back to top]</a></p>
<hr>
<h2 id="working-log">Working log</h2>
<details>
<summary>Click to expand</summary>
<table>
<thead>
<tr>
<th align="left">Data/Time</th>
<th align="left">Activity</th>
<th align="right">Member</th>
</tr>
</thead>
<tbody><tr>
<td align="left">3/17: 1:00-2:00pm</td>
<td align="left">Setup Project</td>
<td align="right">Brayton & Nick</td>
</tr>
<tr>
<td align="left">3/23: 4:15-5:15pm</td>
<td align="left">Setup Project & Doc of steps</td>
<td align="right">Doug & Nick</td>
</tr>
<tr>
<td align="left">3/24: 1:00-5:00pm</td>
<td align="left">Connect to Cozmo, implement basic MCL & pic collection</td>
<td align="right">Doug, Quan & Nick</td>
</tr>
<tr>
<td align="left">3/29: 2:00-5:00pm</td>
<td align="left">MCL, refine code for pic collection</td>
<td align="right">Doug, Quan & Nick</td>
</tr>
<tr>
<td align="left">3/31: 2:00-4:45pm</td>
<td align="left">Done take1Pic, kidnap</td>
<td align="right">Doug, Quan & Nick</td>
</tr>
<tr>
<td align="left">3/31: 5:15-6:30pm</td>
<td align="left">Modify and fix bug in MCL</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">4/1: 1:00-1:50am</td>
<td align="left">Add MSE+Cos_Similar, try MCL, bad result --> suggest creating pano</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">4/5: 3:30-5:30pm</td>
<td align="left">Image stiching for creation of pano</td>
<td align="right">Quan & Nick</td>
</tr>
<tr>
<td align="left">4/7: 1:20-3:30pm</td>
<td align="left">Image cropping, MCL redo</td>
<td align="right">Nick, Quan, Brayton</td>
</tr>
<tr>
<td align="left">4/7: 3:30-4:30 pm</td>
<td align="left">MCL redo</td>
<td align="right">Nick, Quan</td>
</tr>
<tr>
<td align="left">4/12: 2:20-4:50pm</td>
<td align="left">Working on new MCL based on previous group efforts</td>
<td align="right">Nick</td>
</tr>
<tr>
<td align="left">4/12: 7:00-7:50pm</td>
<td align="left">MCL debugging</td>
<td align="right">Nick</td>
</tr>
<tr>
<td align="left">4/14: 1:00-2:30pm</td>
<td align="left">MCL testing/debugging</td>
<td align="right">Brayton</td>
</tr>
<tr>
<td align="left">4/14: 1:00-5:20pm</td>
<td align="left">MCL testing/debugging</td>
<td align="right">Nick</td>
</tr>
<tr>
<td align="left">4/16: 9:20-10:00pm</td>
<td align="left">Make Cozmo relocalize after MCL</td>
<td align="right">Nick</td>
</tr>
<tr>
<td align="left">4/16: 2:00-4:30pm</td>
<td align="left">Make Cozmo relocalize after MCL</td>
<td align="right">Nick</td>
</tr>
<tr>
<td align="left">4/21: 1:00-2:30pm</td>
<td align="left">Cozmo localization tuning, website with documentation</td>
<td align="right">Nick, Doug, Brayton</td>
</tr>
<tr>
<td align="left">4/21: 2:30-5:40pm</td>
<td align="left">Cozmo localization tuning, bins for histogram</td>
<td align="right">Nick</td>
</tr>
<tr>
<td align="left">4/21: 5:00-7:00pm</td>
<td align="left">Modifying pic collection</td>
<td align="right">Brayton</td>
</tr>
<tr>
<td align="left">4/22: 8:00-9:00pm</td>
<td align="left">Modified kidnap</td>
<td align="right">Brayton</td>
</tr>
<tr>
<td align="left">4/23: 7:00-10:00pm</td>
<td align="left">Modified MCL to use new map system</td>
<td align="right">Brayton</td>
</tr>
<tr>
<td align="left">4/24: 9:30-10:30am</td>
<td align="left">Changing MCL and supplementary files</td>
<td align="right">Brayton</td>
</tr>
<tr>
<td align="left">4/24: 1:00-2:00pm</td>
<td align="left">Documentation and archiving of work</td>
<td align="right">Nick, Quan, Brayton</td>
</tr>
<tr>
<td align="left">4/24: 10:00-12:00pm</td>
<td align="left">Write website framework, outline, review final code</td>
<td align="right">Doug</td>
</tr>
<tr>
<td align="left">4/25: 5:15-8:30pm</td>
<td align="left">Clean code, add html-generator, create dataset (1440 imgs)</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">4/25: 7:00-12:00am</td>
<td align="left">Implement sentence_transformers, found unuseful with pretrained (clip-ViT-B-32)</td>
<td align="right">Brayton</td>
</tr>
<tr>
<td align="left">4/26: 10:00-3:00am</td>
<td align="left">Populate website, write copy, finalize website objects and sections</td>
<td align="right">Doug</td>
</tr>
<tr>
<td align="left">4/26: 11:00-12:15pm</td>
<td align="left">Add baseline to train siamese network</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">4/26: 1:30-3:30pm</td>
<td align="left">Add training loop</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">4/27: 4:30-6:30am</td>
<td align="left">Fine-tuned model on our dataset (success)</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">4/28: 7:30-8:30pm</td>
<td align="left">Write doc + clean code + publish website</td>
<td align="right">Quan, Nick</td>
</tr>
<tr>
<td align="left">5/3: 1:30-5:30pm</td>
<td align="left">Clean + factorize code --> found errors, integrate NN model into MCL</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">5/4: 2:30-6:00pm</td>
<td align="left">Debugging, creating separate branch for finished work in Github</td>
<td align="right">Nick</td>
</tr>
<tr>
<td align="left">5/4: 1:00-7:45pm</td>
<td align="left">Continue work on 5/3</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">5/5: 1:00-2:00pm</td>
<td align="left">Finalize NeuralNet approach. Ran test on GG-Colab; did not have chance to test w/ robot</td>
<td align="right">Quan</td>
</tr>
<tr>
<td align="left">5/5: 2:00-3:00pm</td>
<td align="left">Cont. cleaning code -> give up since the <code>cozm_MCL.py</code> and <code>kidnap.py</code> are such a huge mess & finalize README</td>
<td align="right">Quan</td>
</tr>
</tbody></table>
</details>
<p align="right"><a href="#top-page">[Back to top]</a></p>
<hr>
<footer>
<p id="footnote-zipfile"><a href="#footnote-zipfile-ref">[1]</a> Src code (zip) is only available on Gettysburg College server (<a href="http://cs.gettysburg.edu/~nguyqu03/cs371/MonteCosmoLocalization.zip">download</a>) (last update: 5/6/2023).</p>
</footer>
</body>
</html>