SCHEDULE AT-A-GLANCE
Day 3 of ONE Summit |
---|---
2-hour on-site discussion | SAN JOSE MCENERY CONVENTION CENTER
Day 3 of ONE Summit
Wednesday, May 1
Zoom Link: https://zoom.us/j/98538301700?pwd=RXlFdHpZRDlHTzFaVFRnakw2b0F5QT09
Recording: TBD
Time (UTC-7) | Topics
---|---
15:00-15:10 | Welcome note
15:10-15:30 | Jeff Brower, Signalogic: Small Language Model for Device AI Applications (full abstract in the Call for proposals table below)
15:30-15:50 | Hidetsugu Sugiyama, Chief Technology Strategist - Global TME, Red Hat
15:50-16:20 | Vijay Pal, Predictive Maintenance of Hardware
16:20-17:00 | ・Discussion about Akraino 2024 activities ・Collaboration with LF Edge AI Edge and EdgeLake
 | Closing
Call for proposals
No | Name | Company | Email | Presentation title | Abstract | Preferred Time Zone | Comments
---|---|---|---|---|---|---|---
1 | Jeff Brower | Signalogic | jbrower at signalogic dot com | Small Language Model for Device AI Applications | Device AI applications running at the AI Edge on very small form-factor devices (for example pico-ITX), and without an online cloud connection, need to perform automatic speech recognition (ASR) under difficult conditions, including background noise, urgent or stressed voice input, and other talkers in the background. For robotics applications, background noise may also include servo motor and other mechanical noise. Under these conditions, efficient open-source ASRs such as Kaldi and Whisper tend to produce "sound-alike" errors, for example "in the early days a king rolled the stake". To address this issue independently of ASR implementation, Signalogic is developing a Small Language Model (SLM) to correct sound-alike errors, capable of running on a very small form-factor device under 10 W, for example using two (2) Atom CPU cores. The SLM must run every 1/2 second with a backwards/forwards context of 3-4 words. Unlike an LLM, a wide context window, domain knowledge, and extensive web-page training are not needed. | PDT | See the illustrative correction sketch below this table
2 | |||||||
3 | |||||||
4 | |||||||
5 | |||||||
6 | |||||||
7 | |||||||
8 | |||||||
9 | |||||||
10 | |||||||