I’m a developer and researcher working in the field of human-computer interaction. My main interests are sensors, humane interfaces, augmented reality, and ubiquitous computing.
📍 San Francisco
- O Soli Mio - Radar-Powered Gestural Interfaces for Music
- Emulating Touché - Open-Source Capacitive Sensing Interactions with Plants and Water
- Whistlr - iOS Contact Sharing over Audio
- Push-To-Talk Audio Chat App
- Stitch - Founding engineer of Stitch, an open-source tool for designers.
- Roboflow's Swift SDK - Built the first version of an SDK for running Roboflow-trained models on iOS devices.
- AudioKit - Helped launch AudioKit, an open-source Apple framework for audio analysis, synthesis, and processing.
- TiktokenSwift - A Swift package for OpenAI's tiktoken tokenizer library.
- Visual iMessage - What if Siri could describe images in an iMessage thread?
- Inception Labs SwiftUI Diffusion Demo - SwiftUI interfaces for Inception Labs' Language Diffusion Model
- ASL Classifier Demo - using CoreML to detect ASL signs on iOS devices
- Touché Experiments - Homemade Touché (swept-frequency capacitive sensing) plant interactions
- GRT on iOS - recognizing phone motion gestures on iOS
- Flame Sensor BLE Study - Detecting flames and sending updates via BLE
- ESC10-CoreML - recognizing sound events on iOS with CoreML
- Analog and NeoPixel LED control - controlling LED strips with Arduinos
I write here.
Send me a note at nicholasarner (at) gmail (dot) com, or find me on Twitter.