Where artificial agents cannot plausibly be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often unclear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should act as proxies for low-level agents — e.g. individual users of the artificial agents — or as proxies for high-level agents — e.g. designers, distributors or regulators, that is, those who can potentially control the choice behaviour of many artificial agents at once. Whom we take an artificial agent to be a moral proxy for determines the agential perspective from which the choice problems artificial agents face should be framed: should we frame them like the individual choice scenarios previously faced by individual human agents? Or should we instead consider the expected aggregate effects of the many choices made by all artificial agents of a particular type? This paper looks at how artificial agents should be designed to make risky choices, and argues that the question of risky choice by artificial agents shows the Moral Proxy Problem to be both practically relevant and difficult.