Computerspeak by Alexandru Voica

It's not just vibecoding: LLMs are getting really good at transpilation; data centers are reshaping America's power grid; China floods the zone with open source AI models

European investors want to see the ROI of AI; inside Google's race to take on ChatGPT; many AI data centers in China remain unused; AI is changing Hollywood; US AI companies lobby for fewer rules

Alexandru Voica
Mar 28, 2025


If Merriam-Webster were to poll the Hacker News community and Silicon Valley founder chat groups right now for their Word of the Day, there would be one undisputed winner: vibecoding.

While most of the hype so far has concentrated on code generation, with companies such as Anysphere (the startup behind Cursor) and Lovable attracting significant VC funding and seeing exponential usage growth, there’s another area that deserves more attention because it may have industry-wide consequences: code translation, also known as transpilation.

The obvious use case for LLM-powered transpilation is helping large enterprises migrate from older mainframe programming languages such as COBOL (still incredibly popular in banking and healthcare applications) to cloud-friendly alternatives. Java in particular has a strong ecosystem of frameworks that lets software engineers move codebases to the cloud safely, making applications easier to maintain and scale. IBM has demonstrated how its LLM-based watsonx Code Assistant can transform COBOL services into high-quality Java code using its Z Open Editor.

But what if, instead of using LLMs to update code from the high-level programming languages of the past to more modern equivalents, software engineers could go deeper down the abstraction stack and apply transpilation to low-level code, delivering functionality similar to binary translation?

I recently met Professor Abdulrahman Mahmoud, who has been spearheading such an effort. In a recent paper, his team at MBZUAI describes CRT, a lightweight, LLM-driven transpiler engineered to translate directly from complex instruction set computing (CISC) architectures, most prominently x86, to reduced instruction set computing (RISC) architectures such as Arm and RISC-V.
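A common recipe for LLM-driven transpilation (illustrative here, not necessarily CRT's exact pipeline) is generate-and-verify: sample several candidate translations from the model, then keep one that provably behaves like the source, for example via differential testing on many inputs. The sketch below shows only that selection loop in miniature; plain Python functions stand in for assembly programs, a hardcoded list stands in for LLM samples, and every name is hypothetical.

```python
import random

# Toy stand-in for a compiled x86 routine: computes 3*x + 7.
def source_program(x):
    return 3 * x + 7

# Hypothetical LLM output: several candidate "translations", some wrong.
candidate_translations = [
    lambda x: 3 * x - 7,   # buggy candidate
    lambda x: 3 * x + 7,   # behaviorally equivalent candidate
]

def behaviorally_equivalent(f, g, trials=100, seed=0):
    """Differential testing: compare outputs on many random inputs."""
    rng = random.Random(seed)
    return all(f(v) == g(v)
               for v in (rng.randint(-10**6, 10**6) for _ in range(trials)))

def select_translation(source, candidates):
    """Return the first candidate that passes differential testing, else None."""
    for cand in candidates:
        if behaviorally_equivalent(source, cand):
            return cand
    return None

chosen = select_translation(source_program, candidate_translations)
print(chosen(5))  # the selected translation agrees with the source: 22
```

Real systems replace random-input testing with stronger checks (unit test suites, or formal equivalence on the instruction semantics), since differential testing can only show agreement on the inputs it tries.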


This project directly addresses practical issues arising from the technology industry's ongoing shift from x86 to Arm and RISC-V. That transition started in the data center but is now increasingly visible in personal computing too, where Arm and RISC-V deliver not just better energy efficiency and thermal management but also strong performance.

Until recently, the only real-world solutions for running x86 code on Arm-based devices were emulation and dynamic binary translation, represented by projects such as QEMU (an open source emulator that has been around for two decades) and Apple's Rosetta 2, a dynamic binary translator that ensured a smooth rollout of the M1- and M2-based Macs in a world where macOS still carried a lot of legacy x86 code.

However, both projects have limitations: QEMU supports many CPU architectures but carries a performance overhead, while Rosetta 2 offers incredible performance but is a proprietary solution developed specifically for Apple Silicon that cannot be used on, say, Windows devices.
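To see where that runtime overhead comes from, here is a toy model of how a QEMU-style dynamic translator works: code blocks are translated the first time they execute and cached thereafter, so translation cost is paid while the program runs, whereas a static transpiler pays it once, ahead of time. All names here are illustrative, not QEMU internals.

```python
translation_cache = {}
translation_events = []  # records when (expensive) translation work happens

def translate_block(block_id):
    translation_events.append(block_id)  # stand-in for costly JIT translation
    return f"native::{block_id}"

def execute(block_id):
    # Dynamic translator: translate on first execution, then reuse the cache.
    if block_id not in translation_cache:
        translation_cache[block_id] = translate_block(block_id)
    return translation_cache[block_id]

# A hot loop re-executes the same block: only its first run pays for translation,
# but that cost (and the cache lookups) still land inside the program's runtime.
trace = ["loop_body"] * 5 + ["exit"]
results = [execute(b) for b in trace]
print(translation_events)  # each block is translated exactly once
```

A static approach moves all of `translate_block` offline, which is one reason ahead-of-time translation can beat emulation on both speed and memory.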

CRT aims to combine the best of both worlds: broad compatibility across CPU architectures, with performance and power efficiency as good as (or better than) Rosetta 2's.

And so far, it’s delivering on that promise: in the paper referenced above, the MBZUAI researchers report impressive accuracy rates of 79.25% for translating from x86 to Arm, and an even more notable 88.68% for translating from x86 to RISC-V.

In real-world deployment tests on Apple's Arm-based M2 computers, CRT-generated assembly consistently outperformed Apple's proprietary Rosetta 2 translation layer, delivering a remarkable 1.73x speedup. It also brought substantial improvements in resource usage: 2.41x better memory efficiency and 1.47x better energy efficiency (which is notable, considering memory efficiency was a strong advantage of Rosetta).

These quantifiable performance gains underscore CRT’s potential as a transformative technology, which could significantly streamline and optimize software migration and compatibility management processes. The potential benefits of CRT are vast, especially for industries increasingly embracing Arm-based technologies, including data centers, mobile devices, IoT systems, and embedded solutions.

Furthermore, this research signals a shift in the practical capabilities of LLMs: it demonstrates their proficiency in the nuanced, intricate world of low-level assembly instructions, a domain traditionally considered too complex and error-prone for automation via machine learning. By showing that LLMs can manage the detailed syntactic and semantic differences between CPU architectures, this advance could profoundly reshape software engineering, suggesting a future where the complexity, time, and cost of porting extensive legacy software across rapidly evolving hardware platforms are drastically reduced.

Beyond its immediate technical implications, CRT also paves the way for a new paradigm in software-hardware integration, one where legacy applications can seamlessly adapt to new hardware environments without extensive rewrites or heavy reliance on slow, resource-intensive virtualization layers. By enabling software compatibility at the assembly level, CRT could provide unprecedented flexibility and resilience within technological ecosystems, making it significantly easier for companies and software engineers to maintain and optimize their stacks in the face of ongoing hardware innovations or easily migrate between hardware platforms, without feeling locked in by proprietary languages, APIs or toolchains.

The ex-CPU engineer in me suddenly feels excited about silicon again.

And now, here’s this week’s news:

❤️Computer loves

Our top news picks for the week - your essential reading from the world of AI

  • New York Times: How Artificial Intelligence Reasons

  • Forbes: How AI Data Centers Are Reshaping America’s Electric Grid

  • Bloomberg: China Floods the World With AI Models After DeepSeek Success

  • Wired: How Extropic Plans to Unseat Nvidia

  • Reuters: European investors say clock is ticking for AI adopters to deliver

  • MIT Technology Review: China built hundreds of AI data centers to catch the AI boom. Now many stand unused.

  • Fortune: AI avatars’ lack of authenticity won’t stop them from joining the creator economy—and giving humans a run for their money

  • New York Times: Emboldened by Trump, A.I. Companies Lobby for Fewer Rules

  • FT: Chinese AI start-ups overhaul business models after DeepSeek’s success

  • Bloomberg: Google Is Searching for an Answer to ChatGPT

  • FT: China is suffering its own ‘China shock’

  • The Information: How AI Is Changing Hollywood’s United Talent Agency

  • TechCrunch: a16z- and Benchmark-backed 11x has been claiming customers it doesn’t have

© 2025 Alexandru Voica