[Photo: Roderick Bloem]

On the Future of Reactive Synthesis and Why Academia Is Like Sports

by Bettina Könighofer — 21 October 2024
Topics: interview

Could you introduce yourself and your research area?

Sure! My name is Roderick, and I'm originally from the Netherlands. I studied computer science at Leiden University and went on to do my PhD at the University of Colorado Boulder, which I completed in 2001. In 2002, I joined TU Graz as a postdoc and gradually worked my way up: I became a full professor in 2008, and from 2018 to 2023 I had the privilege of serving as Department Head.

As for my research, it’s mainly in Computer Aided Verification. When I started, my group focused a lot on model checking and logics, but over time we expanded into areas like system repair and reactive synthesis, which is basically the process of turning specifications into working systems. Lately, I’ve been more involved in safe AI and security verification, which has been really exciting.

What is your vision for the future of reactive synthesis, and where do you see the greatest potential for this technique?

When we first started working on reactive synthesis, the idea was that it could be an alternative to programming. You’d just write the specification, and the system would essentially build itself. We quickly realized it’s not that simple—writing really precise specifications can be harder than just coding the system yourself, and the tools we’ve developed for synthesis aren’t exactly the most efficient.
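
A standard textbook example gives a feel for what "writing the specification" means here: an arbiter for two clients, with requests r1, r2 as environment inputs and grants g1, g2 as system outputs. In LTL, the specification is

```latex
% Every request is eventually granted, and the two grants never coincide:
\mathbf{G}\big( (r_1 \rightarrow \mathbf{F}\, g_1) \wedge (r_2 \rightarrow \mathbf{F}\, g_2) \wedge \neg (g_1 \wedge g_2) \big)
```

and a synthesis tool turns such a formula directly into a system that satisfies it against every possible input sequence.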

Looking forward, I think the challenge is figuring out where human skills and synthesis tools can complement each other. There are definitely tasks that are easier for people to code manually, and others that are much easier to specify. A great example of this balance is the work I did with Bettina Könighofer on shielding for safe AI. In that case, we let the machine learner do most of the heavy lifting, and then we use the synthesis tool to add formal guarantees to the learning process. It's a really powerful combination.
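
As a rough illustration of the shielding idea, here is a minimal sketch in Python. The gridworld, the hand-written shield, and the random "learner" are all invented for the example; in the actual approach, the shield is synthesized automatically from a temporal-logic safety specification and a model of the environment.

```python
import random

GRID_SIZE = 5        # positions 0..4; position 0 is an unsafe "cliff"
ACTIONS = (-1, +1)   # move left or right


def shield(state: int, action: int) -> int:
    """Override any proposed action that would leave the safe region.

    A synthesized shield plays the same role: it monitors the learner and
    substitutes a safe action whenever the proposed one could violate the
    safety specification.
    """
    if 1 <= state + action <= GRID_SIZE - 1:
        return action    # proposed action is safe: pass it through
    return -action       # otherwise substitute a corrective action


def learner_policy(state: int) -> int:
    """Stand-in for an RL agent; here it simply explores at random."""
    return random.choice(ACTIONS)


state = 2
for _ in range(20):
    proposed = learner_policy(state)
    executed = shield(state, proposed)   # the shield sits between agent and environment
    state += executed
    assert 1 <= state <= GRID_SIZE - 1   # safety holds by construction, even while learning
```

The learner stays free to optimize its behavior inside the safe region; the formal guarantee comes entirely from the shield.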

In the future, I see more of these collaborations between smart people and smart synthesis tools. For instance, we’ve looked at synchronization in multithreading, which is tricky for humans but something that might be easier for synthesis tools to handle.

Can you give us the core idea of your work on security verification?

Sure. The main problem we’re tackling is that attackers, especially those with physical access to a machine, can figure out sensitive information—like secret keys—by analyzing side channels such as power consumption or electromagnetic signals from the chip. There are countermeasures out there, but security often feels like a cat-and-mouse game, where new defenses are met with new attacks. I’ve been fortunate to work with Stefan Mangard, who has deep experience with both sides—attacks and countermeasures.

What’s interesting is how formal methods come into play here. They help us prove that certain types of attacks are impossible within a given model. For example, imagine an attacker can measure the value of a wire in a circuit. If that value correlates with secret information, they can exploit it. Formal verification tools check that there is no such correlation, so we can show that the countermeasure is implemented correctly. But security experts keep us grounded: they're always thinking beyond the limits of any model we create.
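
As a toy version of that check, consider a first-order masked AND gate. Everything below is invented for illustration, and the check works by brute-force enumeration; real verification tools establish the same independence symbolically, for every wire of a real circuit.

```python
from itertools import product


def circuit_wires(sa, sb, ma, mb, r):
    """All wires of a (simplified) first-order masked AND of secrets sa and sb."""
    a0, a1 = sa ^ ma, ma            # two shares of sa
    b0, b1 = sb ^ mb, mb            # two shares of sb
    t0 = (a0 & b0) ^ r              # partial products, refreshed with
    t1 = (a0 & b1) ^ (a1 & b0) ^ (a1 & b1) ^ r   # a fresh random bit r
    return {"a0": a0, "a1": a1, "b0": b0, "b1": b1, "t0": t0, "t1": t1}


def wire_leaks(wire):
    """Does the distribution of this wire depend on the secret inputs?"""
    dists = {}
    for sa, sb in product((0, 1), repeat=2):         # for every secret value...
        counts = [0, 0]
        for ma, mb, r in product((0, 1), repeat=3):  # ...over all randomness
            counts[circuit_wires(sa, sb, ma, mb, r)[wire]] += 1
        dists[(sa, sb)] = tuple(counts)
    return len(set(dists.values())) > 1   # differing distributions = exploitable correlation


for w in ("a0", "a1", "b0", "b1", "t0", "t1"):
    print(w, "leaks" if wire_leaks(w) else "is independent of the secrets")
```

Every individual wire comes out independent of the secrets, while the circuit still computes the right function (t0 XOR t1 equals sa AND sb); an unmasked AND gate would fail the same check immediately.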

Our current research focuses on what happens when you run specific software on particular hardware. What kind of information can leak in that scenario? Lately, we've been working on something called leakage contracts, which extend the instruction set architecture. These contracts don't just describe the functional behavior of instructions, like how an addition handles overflow, but also what kind of information might leak during execution. This approach breaks the problem into two parts: first, ensuring the processor follows the contract, and second, verifying that the software is secure according to this contract. That's a lot easier than verifying it directly against the processor itself.

Of course, there's still much to explore, from power side channels to timing attacks, and from the level of circuits and processors all the way up to compilers and operating systems. The big question is: how do we make sure the entire system is secure?
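
To sketch how the software half of that decomposition might look, here is a toy checker in Python. The mini instruction set, the contract, and the taint rules are all made up for illustration; real leakage contracts are far richer, and the hardware half (checking that the processor actually respects the contract) is a separate verification task.

```python
SECRET = {"key"}   # program values that carry secret data

# The contract: per instruction, which part of the architectural state an
# attacker may observe during execution (on top of the functional semantics).
LEAKAGE_CONTRACT = {
    "add":    lambda ins: [],             # arithmetic leaks nothing
    "load":   lambda ins: [ins["addr"]],  # loads leak their address (cache)
    "branch": lambda ins: [ins["cond"]],  # branches leak their condition (timing)
}


def check_program(program):
    """Flag instructions whose contract-visible observation depends on a secret."""
    tainted = set(SECRET)
    violations = []
    for pc, ins in enumerate(program):
        for observed in LEAKAGE_CONTRACT[ins["op"]](ins):
            if observed in tainted:
                violations.append((pc, ins["op"], observed))
        # Functional semantics: the destination inherits taint from its sources.
        sources = [v for k, v in ins.items() if k in ("src", "addr", "cond")]
        if "dst" in ins and any(s in tainted for s in sources):
            tainted.add(ins["dst"])
    return violations


program = [
    {"op": "add",    "dst": "t", "src": "key"},   # fine: the contract says add leaks nothing
    {"op": "load",   "dst": "x", "addr": "key"},  # violation: secret-dependent address
    {"op": "branch", "cond": "t"},                # violation: t was derived from the key
]

print(check_program(program))   # -> [(1, 'load', 'key'), (2, 'branch', 't')]
```

The point of the contract is exactly this separation: the software analysis above never needs to know how the processor is built, only what the contract promises.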

Why did you choose academia over industry?

One of the biggest reasons was my family. My wife is also an academic, and with two careers and kids, we felt that academia would offer us more flexibility. And it did. Even though it’s hard work, the ability to manage our schedules—one of us handling the kids while the other works, and then switching—has been a big advantage. I’m really happy with the choice. The best part about academia is the people. Students are amazing. They come in young and maybe a bit unsure of themselves, and I get to teach them a lot. By the time they’re finishing, they end up teaching me—it’s really rewarding to see that growth.

My colleagues, too, are brilliant and a lot of fun to work with. Recently, a colleague visited, and instead of working the whole time, we ended up exploring the churches in Graz and enjoying some great food. Conferences often feel like reunions with old friends, people I’ve known for a long time and really care about. There’s a wonderful sense of community, and it’s more collaborative than competitive. It’s always a pleasure.

Tom Henzinger once made a great analogy: academia is like sports. Sure, there's competition, but we're all playing on the same team. It's like a soccer team: the forwards might compete over who scores, but the point is to play together. In academia, you get to choose your collaborators, and you never have to work with someone you don't like. That makes people very collaborative, and it makes the whole experience a lot of fun.