<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://mw.hh.se/wg211/index.php?action=history&amp;feed=atom&amp;title=WG211%2FM7Name</id>
	<title>WG211/M7Name - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://mw.hh.se/wg211/index.php?action=history&amp;feed=atom&amp;title=WG211%2FM7Name"/>
	<link rel="alternate" type="text/html" href="http://mw.hh.se/wg211/index.php?title=WG211/M7Name&amp;action=history"/>
	<updated>2026-04-05T23:01:07Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.5</generator>
	<entry>
		<id>http://mw.hh.se/wg211/index.php?title=WG211/M7Name&amp;diff=235&amp;oldid=prev</id>
		<title>Admin: 1 revision</title>
		<link rel="alternate" type="text/html" href="http://mw.hh.se/wg211/index.php?title=WG211/M7Name&amp;diff=235&amp;oldid=prev"/>
		<updated>2011-12-12T10:06:27Z</updated>

		<summary type="html">&lt;p&gt;1 revision&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;[[Category:WG211]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Theory of mind and bounded rationality without interpretive overhead===&lt;br /&gt;
Oleg Kiselyov and Chung-chieh Shan&lt;br /&gt;
&lt;br /&gt;
Computers and humans that work well together have beliefs about each&lt;br /&gt;
other&amp;#039;s intentions, about each other&amp;#039;s desires, about each other&amp;#039;s&lt;br /&gt;
beliefs, and so on.  To practise such a _theory of mind_, agents need&lt;br /&gt;
to slip easily into each other&amp;#039;s shoes.  Ideally, when Agent A reasons&lt;br /&gt;
about Agent B with complete certainty, Agent A should simulate Agent B&amp;#039;s&lt;br /&gt;
mind as efficiently as if that simulation were reality.  Modeling agents&lt;br /&gt;
as programs, we want Agent A to interpret Agent B&amp;#039;s program _without&lt;br /&gt;
interpretive overhead_, that is, as efficiently as if that program ran&lt;br /&gt;
directly.  A programming language with _delimited control operators_&lt;br /&gt;
lets us eliminate interpretive overhead in a computational model of&lt;br /&gt;
bounded-rational agents that reason about each other probabilistically.&lt;br /&gt;
The key is to reify stochastic programs as probability distributions&lt;br /&gt;
using the increasingly popular _finally tagless_ technique for embedding&lt;br /&gt;
programming languages.  We demonstrate the idea with a simplistic model&lt;br /&gt;
of plausibly deniable bribing.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
</feed>