Compute-in-Memory Optimized Neural Accelerators
By merging memory and computation, cimona breaks through the constraints of conventional hardware and opens a path to the next generation of efficient edge AI.
Unique Selling Points
Breaking old limits. Unlocking new possibilities.
Edge AI pushes conventional chips beyond their limits, demanding a new foundation for computation.
Lower Latency
Because compute-in-memory systems operate with very high parallelism, our hardware runs inference significantly faster than conventional architectures.
Lower Energy Consumption
Compute-in-memory reduces data movement, dramatically cutting power usage for truly efficient AI acceleration.
Co-Design Approach
Our Software Kit lowers the entry barrier. Try it out and see the potential energy and speed gains for your model.
European Sovereignty
Developed in Europe, cimona strengthens independence and technological leadership in AI.
Research & Background
Cimona builds on years of research in compute-in-memory and chip design. Our founding team combines academic excellence with the ambition to transfer technology into real-world applications.
Why we build cimona
Our founding team is united by the belief that AI needs new foundations. Each of us shares a commitment to shaping a more independent and sustainable future of computing.
Paul-Philipp Manea, CEO
Zhenming Yu, COO
Dr. Sebastian Siegel, CTO
Jan Robert Finkbeiner, CPO
Jan Wegener, CBO
Prof. Dr. Emre Neftci, Mentor
Contact
Get In Touch
We are happy to answer questions about cimona. Reach out if you are interested in collaboration or would like more details.