research

I’m currently employed on the research project TAIGER: Training and Guiding AI Agents with Ethical Rules, funded by the Vienna Science and Technology Fund (WWTF).

My current interests lie broadly in hybrid AI; that is, the integration of logic with machine learning. Most of my work so far has involved developing methods for restricting the actions of reinforcement learning agents with (social, ethical, or legal) norms. My earlier work focused on using reasoning engines for deontic logic to restrict the behaviour of RL agents at runtime, but I have since moved towards teaching agents normative behaviour directly, by incentivising compliance as a separate objective derived through automated reasoning or automata.
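As a rough illustration of the second approach, here is a minimal sketch of treating compliance as its own objective alongside the task reward. It is hypothetical Python with made-up names (LabelledStep, objectives, the violation labels), not code from TAIGER or from any of my papers:

```python
from dataclasses import dataclass

@dataclass
class LabelledStep:
    """One environment transition, labelled with the norms it violated."""
    task_reward: float
    violated_norms: frozenset  # names of the norms violated on this step

def objectives(step: LabelledStep, penalty: float = 1.0) -> tuple:
    """Return (task objective, compliance objective) for a multi-objective
    learner. Compliance stays a separate signal rather than being folded
    into the task reward, so the trade-off remains explicitly tunable."""
    compliance = -penalty * len(step.violated_norms)
    return step.task_reward, compliance

# A step that earns task reward 1.0 but violates one norm:
step = LabelledStep(task_reward=1.0, violated_norms=frozenset({"no_trespassing"}))
print(objectives(step))  # (1.0, -1.0)
```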

More generally, my interests lie in ethical AI, logics for normative reasoning and systems for automating reasoning with them, normative/ethical/safe RL, and multi-objective RL. I also have a growing interest in applying these techniques to real-world problems, so if you do applied research in RL, e.g. on technologies for sustainable cities, I’d love to hear about it and possibly propose a collaboration.

the normative supervisor

In many of my papers I have made use of a module I call the ‘normative supervisor’: a normative reasoning module that interfaces with reinforcement learning agents. It takes environmental data from the agent, along with a set of norms, and uses a reasoning engine to determine which of the actions in the agent’s arsenal are compliant.
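To make that interface concrete, here is a minimal sketch of the filtering step in Python. The names (ReasoningEngine, filter_actions, ToyEngine) are hypothetical, and the actual implementation (linked below) is organised differently; this only shows the shape of the idea:

```python
from typing import Protocol

class ReasoningEngine(Protocol):
    """Hypothetical stand-in for a reasoner such as SPINdle."""
    def compliant(self, facts: set, norms: list, action: str) -> bool:
        ...

def filter_actions(engine: ReasoningEngine, observation: set,
                   norms: list, actions: list) -> list:
    """Keep only the actions the reasoner judges compliant with the norms,
    given facts extracted from the agent's current observation."""
    allowed = [a for a in actions if engine.compliant(observation, norms, a)]
    # If no action is compliant (a normative dilemma), one possible fallback
    # is to return everything and let the agent act on the task objective.
    return allowed or list(actions)

class ToyEngine:
    """Toy reasoner: an action is compliant unless a norm forbids it outright."""
    def compliant(self, facts: set, norms: list, action: str) -> bool:
        return f"forbidden({action})" not in norms

print(filter_actions(ToyEngine(), {"at(cell_3)"},
                     ["forbidden(move_left)"], ["move_left", "move_right"]))
# ['move_right']
```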

So far I have employed the defeasible logic theorem prover SPINdle as the reasoning engine, but my colleagues and I are looking into expanding to other reasoners, e.g. ASP solvers.

You can find an implementation of the normative supervisor here, along with an implementation of a DDL-to-LTL synthesis algorithm that utilises it. Please note that the module is still under active development.