Phillip Burr, Head of Product at Lumai – Interview Series

by Md Sazzad Hossain

Phillip Burr is the Head of Product at Lumai, with over 25 years of experience in global product management, go-to-market and leadership roles within leading semiconductor and technology companies, and a proven track record of building and scaling products and services.

Lumai is a UK-based deep tech company developing 3D optical computing processors to accelerate artificial intelligence workloads. By performing matrix-vector multiplications using beams of light in three dimensions, its technology offers up to 50x the performance and 90% less power consumption compared to traditional silicon-based accelerators. This makes it particularly well-suited for AI inference tasks, including large language models, while significantly reducing energy costs and environmental impact.

What inspired the founding of Lumai, and how did the idea evolve from University of Oxford research into a commercial venture?

The initial spark was ignited when one of the founders of Lumai, Dr. Xianxin Guo, was awarded an 1851 Research Fellowship at the University of Oxford. The interviewers understood the potential of optical computing and asked whether Xianxin would consider patents and spinning out a company if his research was successful. This got Xianxin's creative mind firing, and when he, alongside one of Lumai's other co-founders, Dr. James Spall, had shown that using light to do the computation at the heart of AI could both dramatically improve AI performance and reduce the energy used, the stage was set. They knew that existing silicon-only AI hardware was (and still is) struggling to increase performance without significantly increasing power and cost and, hence, if they could solve this problem using optical compute, they could create a product that customers wanted. They took this idea to some VCs who backed them to form Lumai. Lumai recently closed its second round of funding, raising over $10m and bringing in additional investors who also believe that optical compute can continue to scale and meet growing AI performance demand without increasing power.

You've had a strong career across Arm, indie Semiconductor, and more. What drew you to join Lumai at this stage?

The short answer is team and technology. Lumai has a strong team of optical, machine learning and data center experts, bringing in experience from the likes of Meta, Intel, Altera, Maxeler, Seagate and IBM (together with my own experience at Arm, indie, Mentor Graphics and Motorola). I knew that a team of outstanding people so focused on solving the challenge of slashing the cost of AI inference could do amazing things.

I firmly believe that the future of AI demands new, innovative breakthroughs in computing. The promise of being able to offer 50x the AI compute performance as well as cutting the cost of AI inference to 1/10th compared to today's solutions was simply too good an opportunity to miss.

What were some of the early technical or business challenges your founding team faced in scaling from a research breakthrough to a product-ready company?

The research breakthrough proved that optics could be used for fast and very efficient matrix-vector multiplication. Despite the technical breakthroughs, the biggest challenge was convincing people that Lumai could succeed where other optical computing startups had failed. We had to spend time explaining that Lumai's approach was very different and that, instead of relying on a single 2D chip, we used 3D optics to reach the levels of scale and efficiency needed. There are of course many steps to get from lab research to technology that can be deployed at scale in a data center. We recognized very early on that the key to success was bringing in engineers who have experience developing products in high volume and for data centers. The other area is software: it is essential that the standard AI frameworks and models can benefit from Lumai's processor, and that we provide the tools and frameworks to make this as seamless as possible for AI software engineers.

Lumai's technology is said to use 3D optical matrix-vector multiplication. Can you break that down in simple terms for a general audience?

AI systems need to do a lot of mathematical calculations called matrix-vector multiplication. These calculations are the engine that powers AI responses. At Lumai, we do this using light instead of electricity. Here's how it works:

  1. We encode information into beams of light
  2. These light beams travel through 3D space
  3. The light interacts with lenses and special materials
  4. These interactions complete the mathematical operation

By using all three dimensions of space, we can process more information with each beam of light. This makes our approach very efficient, reducing the energy, time and cost needed to run AI systems.
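To make the arithmetic behind those steps concrete, here is a minimal NumPy sketch of the matrix-vector multiplication being described. The sizes and code are purely illustrative, not Lumai's implementation, which performs the operation optically rather than digitally.

```python
import numpy as np

# Illustrative only: the matrix-vector product at the heart of AI inference.
# Lumai performs this optically; the math being accelerated is the same.
weights = np.random.randn(1024, 1024)   # weight matrix of one neural-network layer
activations = np.random.randn(1024)     # input activation vector

# On a digital chip this is roughly a million multiply-accumulate operations;
# an optical processor encodes the vector onto light beams and lets the optics
# form the same sums of products in a single pass.
output = weights @ activations
print(output.shape)  # (1024,)
```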

What are the main advantages of optical computing over traditional silicon-based GPUs and even integrated photonics?

Because the rate of advancement in silicon technology has significantly slowed, every step up in performance of a silicon-only AI processor (like a GPU) results in a significant increase in power. Silicon-only solutions consume an incredible amount of power and are chasing diminishing returns, which makes them highly complex and expensive. The advantage of using optics is that once in the optical domain there is almost no power being consumed. Energy is used to get into the optical domain but, for example, in Lumai's processor we can achieve over 1,000 computation operations for each beam of light, every single cycle, thus making it very efficient. This scalability cannot be achieved using integrated photonics due to both physical size constraints and signal noise, with the number of computation operations of a silicon-photonic solution at only 1/8th of what Lumai can achieve today.
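As a rough consistency check on those figures (an assumption on our part, not a description of Lumai's architecture): if the 1024 x 1024 single-cycle operation mentioned later in the interview were carried out with one beam per input-vector element, the per-beam operation count would work out as follows.

```python
# Back-of-envelope sketch, assuming one beam per input-vector element
# (our illustrative assumption, not a stated detail of Lumai's design).
N = 1024                      # matrix/vector dimension quoted in the interview
ops_per_beam = 2 * N          # one multiply and one add per matrix row
total_ops_per_cycle = N * ops_per_beam
print(ops_per_beam)           # 2048, consistent with "over 1,000 operations per beam"
print(total_ops_per_cycle)    # 2097152 (~2.1 million operations per optical cycle)
```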

How does Lumai's processor achieve near-zero latency inference, and why is that such a critical factor for modern AI workloads?

Although we wouldn't claim that the Lumai processor offers zero latency, it does execute a very large (1024 x 1024) matrix-vector operation in a single cycle. Silicon-only solutions typically divide a matrix into smaller matrices, which are individually processed step by step, and the results then have to be combined. This takes time and results in more memory and energy being used. Reducing the time, energy and cost of AI processing is essential both to allowing more companies to benefit from AI and to enabling advanced AI in the most sustainable way.
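A small sketch of the difference being described: the tiled accumulation below mimics how a digital accelerator with a 256-wide multiply unit might process a 1024 x 1024 matrix-vector product in stages, whereas a single optical cycle computes the whole product at once. The tile size and code are illustrative assumptions, not Lumai's or any specific GPU's implementation.

```python
import numpy as np

N, TILE = 1024, 256                 # sizes are illustrative
W = np.random.randn(N, N)           # weight matrix
x = np.random.randn(N)              # input vector

# Single-shot product: what one optical cycle computes in parallel.
y_full = W @ x

# Tiled product: 16 partial products that must be scheduled, read from memory
# and accumulated step by step -- the extra time, memory and energy referred to above.
y_tiled = np.zeros(N)
for i in range(0, N, TILE):
    for j in range(0, N, TILE):
        y_tiled[i:i+TILE] += W[i:i+TILE, j:j+TILE] @ x[j:j+TILE]

assert np.allclose(y_full, y_tiled)  # both approaches give the same result
```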

Can you walk us through how your PCIe-compatible form factor integrates with existing data center infrastructure?

The Lumai processor uses PCIe form factor cards alongside a standard CPU, all within a standard 4U shelf. We are working with a range of data center rack equipment suppliers so that the Lumai processor integrates with their own equipment. We use standard network interfaces, standard software, etc. so that externally the Lumai processor will look just like any other data center processor.

Data center energy usage is a growing global concern. How does Lumai position itself as a sustainable solution for AI compute?

Data center energy consumption is rising at an alarming rate. According to a report from the Lawrence Berkeley National Laboratory, data center power use in the U.S. is expected to triple by 2028, consuming up to 12% of the nation's power. Some data center operators are considering installing nuclear power to provide the energy needed. The industry needs to look at different approaches to AI, and we believe that optics is the answer to this energy crisis.

Can you explain how Lumai's architecture avoids the scalability bottlenecks of current silicon and photonic approaches?

The performance of the first Lumai processor is just the start of what is achievable. We expect that our solution will continue to provide huge leaps in performance, by increasing optical clock speeds and vector widths, all with no corresponding increase in energy consumed. No other solution can achieve this. Standard digital silicon-only approaches will continue to consume more and more cost and power for every increase in performance. Silicon photonics cannot achieve the vector width needed, and hence companies that were developing integrated photonics for data center compute have moved to focus on other parts of the data center, for example optical interconnect or optical switching.

What role do you see optical computing playing in the future of AI, and more broadly, in computing as a whole?

Optics as a whole will play a huge part in data centers going forward, from optical interconnect, optical networking and optical switching to, of course, optical AI processing. The demands that AI is placing on the data center are the key driver of this move to optical. Optical interconnect will enable faster connections between AI processors, which is essential for large AI models. Optical switching will enable more efficient networking, and optical compute will enable faster, more power-efficient and lower-cost AI processing. Together they will help enable even more advanced AI, overcoming the challenges of the slowdown in silicon scaling on the compute side and the speed limitations of copper on the interconnect side.

Thank you for the great interview; readers who wish to learn more should visit Lumai.

Tags: Burr, Interview, Lumai, Phillip, Product, Series