Job description

Company Overview:

Ambiq has been on a singular mission since 2010 to put intelligence everywhere by creating the most energy-efficient semiconductor solutions for IoT endpoint devices. Using the revolutionary Subthreshold Power Optimized Technology (SPOT®) Platform, Ambiq’s record-breaking ultra-low-power solutions, including MCUs and SoCs, have helped global device makers deliver more than 150 million products with advanced features, enhanced performance, and extended battery life.

With a leading market share in wearables, shipping at a rate of 1 million units per month, Ambiq is now expanding its impact to novel endpoint products such as hearables, smart home automation, industrial IoT preventive monitoring, and more.

Our innovative and fast-moving research, development, production, marketing, sales, and operations teams are spread across several continents, including the US (Austin and San Jose), Taiwan (Hsinchu), China (Shenzhen and Shanghai), Japan (Tokyo), and Singapore. We value continued technology innovation, fanatical attention to customer needs, collaborative decision making, and, above all, enthusiasm for energy efficiency, and we embrace candidates who share these same values. The successful candidate must be self-motivated, creative, and comfortable learning and driving exciting new technologies. We encourage and nurture an environment for growth, with opportunities to work on complex, interesting, and challenging projects that will create a lasting impact. Come join us on our quest for 100 billion devices. The endpoint intelligence revolution starts here.


Specific Responsibilities:

  • Identify, refine, and/or develop sophisticated ML and DL models for deployment in highly constrained environments.
  • Train models using SOTA compression techniques to fit within specific memory, compute, and power envelopes, making trade-offs between compression and accuracy (see the quantization sketch after this list).
  • Publish and maintain these models in a ModelZoo/Garden, including Jupyter Notebooks, documentation, and other assets needed by our customers to bootstrap their internal AI features.
  • Share this work via conferences, meetups, workshops, and publications.
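
As an illustration of the compression-versus-accuracy trade-off mentioned above, the following is a minimal sketch of post-training full-integer quantization with TensorFlow Lite. The model architecture, input shape, and calibration data are hypothetical placeholders, not an Ambiq-specific workflow.

```python
# Minimal sketch: post-training int8 quantization with TensorFlow Lite.
# The model, input shape, and calibration data below are illustrative only.
import numpy as np
import tensorflow as tf

# Assume a small Keras model trained for an audio/time-series task.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(49, 10)),                  # e.g. MFCC frames
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_dataset():
    # In practice, a few hundred real samples would be used for calibration.
    for _ in range(100):
        yield [np.random.rand(1, 49, 10).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so the model can run on integer-only MCUs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
print(f"Quantized model size: {len(tflite_model)} bytes")
```

The resulting flatbuffer is what would typically be deployed with TFLite for Microcontrollers, with model size checked against the target memory envelope and accuracy re-validated on a held-out set.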

Required Skills and Abilities:

  • Experience with SOTA pruning, distillation, and quantization approaches for CNNs and RNNs (see the pruning sketch after this list).
  • Experience with one or more of the following AI task domains: audio classification, speech, and/or time-series tasks, including domain-specific feature extraction for those tasks.
  • TensorFlow (experience with TFLite, TFLite for Microcontrollers, MicroTVM, and/or PyTorch/Glow is a plus).
  • Dataset creation and curation.
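
As one concrete example of the pruning experience referenced above, here is a minimal sketch of magnitude pruning with the TensorFlow Model Optimization toolkit. The model, sparsity target, schedule, and training data are illustrative placeholders.

```python
# Minimal sketch: magnitude pruning with tensorflow_model_optimization.
# The model, 80% sparsity target, and random data below are illustrative only.
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(128,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Ramp sparsity from 0% to 80% over the first 1,000 fine-tuning steps;
# real training should run long enough for the schedule to complete.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8,
    begin_step=0, end_step=1000)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=pruning_schedule)

pruned_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])

x = np.random.rand(256, 128).astype(np.float32)
y = np.random.randint(0, 10, size=(256,))
# UpdatePruningStep is required so the pruning masks advance during training.
pruned_model.fit(x, y, epochs=2,
                 callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before export so only the sparse weights remain.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```

The stripped model can then be converted with TFLite (as in the earlier sketch), where the induced sparsity helps meet memory and compute budgets while the accuracy impact is measured on a validation set.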

Bonus Qualifications:

  • Past #TinyML involvement or experience
  • Experience developing and optimizing for TFLite for Microcontrollers
  • Experience with compression of attention-based architectures
  • Experience with Glow or other model-to-binary compilers
  • Experience with ONNX, ONNX runtime, and/or MLIR
  • Experience with optimizing for heterogeneous AI compute (e.g., CPU + NPU + DSP)

Education and Experience:

  • A bachelor’s degree in computer science or a related field, with at least 2 years of relevant experience, is required. A master’s degree or PhD in a related field is highly desirable.
  • Experience developing ML and DL models in TensorFlow and/or PyTorch.
  • Experience with model compression techniques.
  • Experience creating and maintaining datasets in the audio or time-series domains is highly desirable.
