Title (string) | Abstract Note (string) | Url (string) | Publication Year (string) | Item Type (string) | Author (string) | Publication Title (string) | ID (int32) | Label (class label) |
|---|---|---|---|---|---|---|---|---|
Malign generalization without internal search | In my last post, I challenged the idea that inner alignment failures should be explained by appealing to agents which perform explicit internal search. By doing so, I argued that we should instead appeal to the more general concept of malign generalization, and treat mesa-misalignment as a special case. Unfortunately... | https://www.alignmentforum.org/posts/ynt9TD6PrYw6iT49m/malign-generalization-without-internal-search | 2020 | blogPost | Barnett, Matthew | AI Alignment Forum | 0 | 1TAI safety research |
Utility Indifference | Consider an AI that follows its own motivations. We’re not entirely sure what its motivations are, but we would prefer that the AI cooperate with humanity; or, failing that, that we can destroy it before it defects. We’ll have someone sitting in a room, their finger on a detonator, ready at the slightest hint of defect... | | 2010 | report | Armstrong, Stuart | | 1 | 1TAI safety research |
Improving Sample Efficiency in Model-Free Reinforcement Learning from Images | Training an agent to solve control tasks directly from high-dimensional images with model-free reinforcement learning (RL) has proven difficult. A promising approach is to learn a latent representation together with the control policy. However, fitting a high-capacity encoder using a scarce reward signal is sample inef... | http://arxiv.org/abs/1910.01741 | 2020 | manuscript | Yarats, Denis; Zhang, Amy; Kostrikov, Ilya; Amos, Brandon; Pineau, Joelle; Fergus, Rob | | 2 | 2not TAI safety research |
Teaching A.I. Systems to Behave Themselves (Published 2017) | As philosophers and pundits worry that artificial intelligence will one day harm the world, some researchers are working on ways to lower the risks. | https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html | 2017 | newspaperArticle | Metz, Cade | The New York Times | 3 | 2not TAI safety research |
Incentives in Teams | | https://www.jstor.org/stable/1914085?origin=crossref | 1973 | journalArticle | Groves, Theodore | Econometrica | 4 | 2not TAI safety research |
A bargaining-theoretic approach to moral uncertainty | This paper explores a new approach to the problem of decision under relevant moral uncertainty. We treat the case of an agent making decisions in the face of moral uncertainty on the model of bargaining theory, as if the decision-making process were one of bargaining among different internal parts of the agent, with diff... | | 2019 | report | Greaves, Hilary; Cotton-Barratt, Owen | | 5 | 2not TAI safety research |
The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare | It is unknown how abundant extraterrestrial life is, or whether such life might be complex or intelligent. On Earth, the emergence of complex intelligent life required a preceding series of evolutionary transitions such as abiogenesis, eukaryogenesis, and the evolution of sexual reproduction, multicellularity, and inte... | https://www.liebertpub.com/doi/full/10.1089/ast.2019.2149 | 2020 | journalArticle | Snyder-Beattie, Andrew E.; Sandberg, Anders; Drexler, K. Eric; Bonsall, Michael B. | Astrobiology | 6 | 2not TAI safety research |
Changing Identity: Retiring from Unemployment | | https://academic.oup.com/ej/article/124/575/149-166/5076984 | 2014 | journalArticle | Hetschko, Clemens; Knabe, Andreas; Schöb, Ronnie | The Economic Journal | 7 | 2not TAI safety research |
Model-Based Reinforcement Learning via Meta-Policy Optimization | Model-based reinforcement learning approaches carry the promise of being data efficient. However, due to challenges in learning dynamics models that sufficiently match the real-world dynamics, they struggle to achieve the same asymptotic performance as model-free methods. We propose Model-Based Meta-Policy-Optimization... | http://arxiv.org/abs/1809.05214 | 2018 | manuscript | Clavera, Ignasi; Rothfuss, Jonas; Schulman, John; Fujita, Yasuhiro; Asfour, Tamim; Abbeel, Pieter | | 8 | 2not TAI safety research |
Advancing rational analysis to the algorithmic level | The commentaries raised questions about normativity, human rationality, cognitive architectures, cognitive constraints, and the scope of resource rational analysis (RRA). We respond to these questions and clarify that RRA is a methodological advance that extends the scope of rational modeling to un... | https://www.cambridge.org/core/product/identifier/S0140525X19002012/type/journal_article | 2020 | journalArticle | Lieder, Falk; Griffiths, Thomas L. | Behavioral and Brain Sciences | 9 | 2not TAI safety research |
Confronting future catastrophic threats to humanity | | https://linkinghub.elsevier.com/retrieve/pii/S0016328715001135 | 2015 | journalArticle | Baum, Seth D.; Tonn, Bruce E. | Futures | 10 | 1TAI safety research |
Latent Variables and Model Mis-Specification | Posted as part of the AI Alignment Forum sequence on Value Learning. Rohin's note: So far, we’ve seen that ambitious value learning needs to understand human biases, and that we can't simply learn the biases in tandem with the reward. Perhaps we could hardcode a specific model of human biases? Such a model is likely to... | https://www.alignmentforum.org/posts/gnvrixhDfG7S2TpNL/latent-variables-and-model-mis-specification | 2018 | blogPost | Steinhardt, Jacob | AI Alignment Forum | 11 | 1TAI safety research |
Economics of the singularity | | http://ieeexplore.ieee.org/document/4531461/ | 2008 | journalArticle | Hanson, Robin | IEEE Spectrum | 12 | 1TAI safety research |
Penalizing side effects using stepwise relative reachability | How can we design safe reinforcement learning agents that avoid unnecessary disruptions to their environment? We show that current approaches to penalizing side effects can introduce bad incentives, e.g. to prevent any irreversible changes in the environment, including the actions of other agents. To isolate the source... | http://arxiv.org/abs/1806.01186 | 2019 | conferencePaper | Krakovna, Victoria; Orseau, Laurent; Kumar, Ramana; Martic, Miljan; Legg, Shane | Proceedings of the Workshop on Artificial Intelligence Safety 2019 | 13 | 1TAI safety research |
“Explaining” machine learning reveals policy challenges | | https://www.sciencemag.org/lookup/doi/10.1126/science.aba9647 | 2020 | journalArticle | Coyle, Diane; Weller, Adrian | Science | 14 | 1TAI safety research |
How unlikely is a doomsday catastrophe? | Numerous Earth-destroying doomsday scenarios have recently been analyzed, including breakdown of a metastable vacuum state and planetary destruction triggered by a “strangelet” or microscopic black hole. We point out that many previous bounds on their frequency give a false sense of security: one cannot infer that suc... | https://arxiv.org/abs/astro-ph/0512204v2 | 2005 | manuscript | Tegmark, Max; Bostrom, Nick | | 15 | 1TAI safety research |
A new model and dataset for long-range memory | This blog introduces a new long-range memory model, the Compressive Transformer, alongside a new benchmark for book-level language modelling, PG19. We provide the conceptual tools needed to understand this new research in the context of recent developments in memory models and language modelling. | deepmind.com/blog/article/A_new_model_and_dataset_for_long-range_memory | 2020 | blogPost | Rae, Jack; Lillicrap, Timothy | Deepmind | 16 | 2not TAI safety research |
Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences | Bayesian reward learning from demonstrations enables rigorous safety and uncertainty analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally intractable for complex control problems. We propose Bayesian Reward Extrapolation (Bayesian REX), a highly efficient B... | http://arxiv.org/abs/2002.09089 | 2020 | conferencePaper | Brown, Daniel S.; Coleman, Russell; Srinivasan, Ravi; Niekum, Scott | arXiv:2002.09089 [cs, stat] | 17 | 1TAI safety research |
Specification gaming: the flip side of AI ingenuity | Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything h... | deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity | 2020 | blogPost | Krakovna, Victoria; Uesato, Jonathan; Mikulik, Vladimir; Rahtz, Matthew; Everitt, Tom; Kumar, Ramana; Kenton, Zachary; Leike, Jan; Legg, Shane | Deepmind | 18 | 1TAI safety research |
Vingean Reflection: Reliable Reasoning for Self-Improving Agents | Today, human-level machine intelligence is in the domain of futurism, but there is every reason to expect that it will be developed eventually. Once artificial agents become able to improve themselves further, they may far surpass human intelligence, making it vitally important to ensure that the result of an “intellige... | https://intelligence.org/files/VingeanReflection.pdf | 2015 | report | Fallenstein, Benja; Soares, Nate | | 19 | 1TAI safety research |
Directed Policy Gradient for Safe Reinforcement Learning with Human Advice | Many currently deployed Reinforcement Learning agents work in an environment shared with humans, be them co-workers, users or clients. It is desirable that these agents adjust to people's preferences, learn faster thanks to their help, and act safely around them. We argue that most current approaches that learn from hu... | http://arxiv.org/abs/1808.04096 | 2018 | manuscript | Plisnier, Hélène; Steckelmacher, Denis; Brys, Tim; Roijers, Diederik M.; Nowé, Ann | | 20 | 1TAI safety research |
Cognitive prostheses for goal achievement | Procrastination takes a considerable toll on people’s lives, the economy and society at large. Procrastination is often a consequence of people’s propensity to prioritize their immediate experiences over the long-term consequences of their actions. This suggests that aligning immediate rewards with long-term values cou... | https://www.nature.com/articles/s41562-019-0672-9 | 2019 | journalArticle | Lieder, Falk; Chen, Owen X.; Krueger, Paul M.; Griffiths, Thomas L. | Nature Human Behaviour | 21 | 2not TAI safety research |
Forecasting Transformative AI: An Expert Survey | Transformative AI technologies have the potential to reshape critical aspects of society in the near future. However, in order to properly prepare policy initiatives for the arrival of such technologies accurate forecasts and timelines are necessary. A survey was administered to attendees of three AI conferences during... | http://arxiv.org/abs/1901.08579 | 2019 | manuscript | Gruetzemacher, Ross; Paradice, David; Lee, Kang Bok | | 22 | 1TAI safety research |
Guide Me: Interacting with Deep Networks | Interaction and collaboration between humans and intelligent machines has become increasingly important as machine learning methods move into real-world applications that involve end users. While much prior work lies at the intersection of natural language and vision, such as image captioning or image generation from t... | http://arxiv.org/abs/1803.11544 | 2018 | conferencePaper | Rupprecht, Christian; Laina, Iro; Navab, Nassir; Hager, Gregory D.; Tombari, Federico | Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 23 | 2not TAI safety research |
Thread: Circuits | What can we learn if we invest heavily in reverse engineering a single neural network? | https://distill.pub/2020/circuits | 2020 | journalArticle | Cammarata, Nick; Carter, Shan; Goh, Gabriel; Olah, Chris; Petrov, Michael; Schubert, Ludwig | Distill | 24 | 2not TAI safety research |
Visualizing Representations: Deep Learning and Human Beings | | http://colah.github.io/posts/2015-01-Visualizing-Representations/ | 2015 | blogPost | Olah, Chris | Colah's blog | 25 | 2not TAI safety research |
One Decade of Universal Artificial Intelligence | The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in book ... | http://arxiv.org/abs/1202.6153 | 2012 | journalArticle | Hutter, Marcus | Theoretical Foundations of Artificial General Intelligence | 26 | 1TAI safety research |
Should Artificial Intelligence Governance be Centralised?: Design Lessons from History | | https://dl.acm.org/doi/10.1145/3375627.3375857 | 2020 | conferencePaper | Cihon, Peter; Maas, Matthijs M.; Kemp, Luke | Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society | 27 | 1TAI safety research |
Feature Expansive Reward Learning: Rethinking Human Input | In collaborative human-robot scenarios, when a person is not satisfied with how a robot performs a task, they can intervene to correct it. Reward learning methods enable the robot to adapt its reward function online based on such human input. However, due to the real-time nature of the input, this online adaptation requ... | http://arxiv.org/abs/2006.13208 | 2020 | manuscript | Bobu, Andreea; Wiggert, Marius; Tomlin, Claire; Dragan, Anca D. | | 28 | 2not TAI safety research |
Emergent Complexity via Multi-Agent Competition | Reinforcement learning algorithms can train agents that solve problems in complex, interesting environments. Normally, the complexity of the trained agent is closely related to the complexity of the environment. This suggests that a highly capable agent requires a complex environment for training. In this paper, we poi... | http://arxiv.org/abs/1710.03748 | 2018 | conferencePaper | Bansal, Trapit; Pachocki, Jakub; Sidor, Szymon; Sutskever, Ilya; Mordatch, Igor | arXiv:1710.03748 [cs] | 29 | 2not TAI safety research |
Learning Agile Robotic Locomotion Skills by Imitating Animals | Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics. While manually-designed controllers have been able to emulate many complex behaviors, building such controllers involves a time-consuming and difficult development process, often requiring substantial expertise... | http://arxiv.org/abs/2004.00784 | 2020 | conferencePaper | Peng, Xue Bin; Coumans, Erwin; Zhang, Tingnan; Lee, Tsang-Wei; Tan, Jie; Levine, Sergey | arXiv:2004.00784 [cs] | 30 | 2not TAI safety research |
Antitrust-Compliant AI Industry Self-Regulation | The touchstone of antitrust compliance is competition. To be legally permissible, any industrial restraint on trade must have sufficient countervailing procompetitive justifications. Usually, anticompetitive horizontal agreements like boycotts (including a refusal to produce certain products) are per se illegal. | https://cullenokeefe.com/blog/antitrust-compliant-ai-industry-self-regulation | 2020 | manuscript | O’Keefe, Cullen | | 31 | 1TAI safety research |
Machine Learning Explainability for External Stakeholders | As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and to make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-user... | https://arxiv.org/abs/2007.05408v1 | 2020 | conferencePaper | Bhatt, Umang; Andrus, McKane; Weller, Adrian; Xiang, Alice | | 32 | 1TAI safety research |
Avoiding Wireheading with Value Reinforcement Learning | How can we design good goals for arbitrarily intelligent agents? Reinforcement learning (RL) is a natural approach. Unfortunately, RL does not work well for generally intelligent agents, as RL agents are incentivised to shortcut the reward sensor for maximum reward -- the so-called wireheading problem. In this paper we... | http://arxiv.org/abs/1605.03143 | 2016 | conferencePaper | Everitt, Tom; Hutter, Marcus | AGI 2016: Artificial General Intelligence | 33 | 1TAI safety research |
Principles for the Application of Human Intelligence | Before humans become the standard way in which we make decisions, we need to consider the risks and ensure implementation of human decision-making systems does not cause widespread harm. | https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/ | 2019 | blogPost | Collins, Jason | Behavioral Scientist | 34 | 2not TAI safety research |
Backup utility functions as a fail-safe AI technique | Many experts believe that AIs will, within the not-too-distant future, become powerful enough for their decisions to have tremendous impact. Unfortunately, setting up AI goal systems in a way that results in benevolent behavior is expected to be difficult, and we cannot be certain to get it completely right on the first a... | https://longtermrisk.org/files/backup-utility-functions.pdf | 2016 | manuscript | Oesterheld, Caspar | | 35 | 1TAI safety research |
Predicting human decisions with behavioral theories and machine learning | Behavioral decision theories aim to explain human behavior. Can they help predict it? An open tournament for prediction of human choices in fundamental economic decision tasks is presented. The results suggest that integration of certain behavioral theories as features in machine learning systems provides the best pred... | http://arxiv.org/abs/1904.06866 | 2019 | manuscript | Plonsky, Ori; Apel, Reut; Ert, Eyal; Tennenholtz, Moshe; Bourgin, David; Peterson, Joshua C.; Reichman, Daniel; Griffiths, Thomas L.; Russell, Stuart J.; Carter, Evan C.; Cavanagh, James F.; Erev, Ido | | 36 | 1TAI safety research |
Exchange-Traded Funds, Market Structure, and the Flash Crash | | https://www.tandfonline.com/doi/full/10.2469/faj.v68.n4.6 | 2012 | journalArticle | Madhavan, Ananth | Financial Analysts Journal | 37 | 2not TAI safety research |
A general model of safety-oriented AI development | This may be trivial or obvious for a lot of people, but it doesn't seem like anyone has bothered to write it down (or I haven't looked hard enough). It started out as a generalization of Paul Christiano's IDA, but also covers things like safe recursive self-improvement. Start with a team of one or more humans (research... | https://www.alignmentforum.org/posts/idb5Ppp9zghcichJ5/a-general-model-of-safety-oriented-ai-development | 2018 | blogPost | Wei Dai | AI Alignment Forum | 38 | 1TAI safety research |
The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions | The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fa... | | 2019 | conferencePaper | Whittlestone, Jess; Nyrup, Rune; Alexandrova, Anna; Cave, Stephen | AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society | 39 | 1TAI safety research |
Enhancing metacognitive reinforcement learning using reward structures and feedback | How do we learn to think better, and what can we do to promote such metacognitive learning? Here, we propose that cognitive growth proceeds through metacognitive reinforcement learning. We apply this theory to model how people learn how far to plan ahead and test its predictions about the speed of metacognitive learnin... | | 2017 | conferencePaper | Krueger, Paul M; Lieder, Falk; Griffiths, Thomas L | 39th Annual Meeting of the Cognitive Science Society | 40 | 2not TAI safety research |
Learning agents for uncertain environments (extended abstract) | | http://portal.acm.org/citation.cfm?doid=279943.279964 | 1998 | conferencePaper | Russell, Stuart | Proceedings of the eleventh annual conference on Computational learning theory - COLT' 98 | 41 | 1TAI safety research |
Existential Risk and Growth | Human activity can create or mitigate risks of catastrophes, such as nuclear war, climate change, pandemics, or artificial intelligence run amok. These could even imperil the survival of human civilization. What is the relationship between economic growth and such existential risks? In a model of directed technical chan... | | 2020 | report | Aschenbrenner, Leopold | | 42 | 2not TAI safety research |
Coherence arguments do not imply goal-directed behavior | One of the most pleasing things about probability and expected utility theory is that there are many coherence arguments that suggest that these are the “correct” ways to reason. If you deviate from what the theory prescribes, then you must be executing a dominated strategy. There must be some other strategy that never... | https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior | 2018 | blogPost | Shah, Rohin | AI Alignment Forum | 43 | 1TAI safety research |
Two Alternatives to Logical Counterfactuals | The following is a critique of the idea of logical counterfactuals. The idea of logical counterfactuals has appeared in previous agent foundations research (especially at MIRI): here, here. “… | https://unstableontology.com/2020/04/01/alternatives-to-logical-counterfactuals/ | 2020 | blogPost | Taylor, Jessica | Unstable Ontology | 44 | 1TAI safety research |
The race for an artificial general intelligence: implications for public policy | An arms race for an artificial general intelligence (AGI) would be detrimental for and even pose an existential threat to humanity if it results in an unfriendly AGI. In this paper, an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that, in a winn... | https://doi.org/10.1007/s00146-019-00887-x | 2019 | journalArticle | Naudé, Wim; Dimitri, Nicola | AI & Society | 45 | 1TAI safety research |
Neuroevolution of Self-Interpretable Agents | Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight. It is a consequence of the selective attention in perception that lets us remain focused on important parts of our world without distraction from irrelevant details. Motivated by selective attention, we study the prop... | http://arxiv.org/abs/2003.08165 | 2020 | conferencePaper | Tang, Yujin; Nguyen, Duong; Ha, David | Proceedings of the 2020 Genetic and Evolutionary Computation Conference | 46 | 2not TAI safety research |
Brainjacking in deep brain stimulation and autonomy | | | 2018 | journalArticle | Pugh, Jonathan; Pycroft, Laurie; Sandberg, Anders; Aziz, Tipu; Savulescu, Julian | Ethics and information technology | 47 | 2not TAI safety research |
AI development incentive gradients are not uniformly terrible | Much of the work for this post was done together with Nuño Sempere Perhaps you think that your values will be best served if the AGI you (or your team, company or nation) are developing is deployed first. Would you decide that it's worth cutting a few corners, reducing your safety budget, and pushing ahead to try and g... | https://www.lesswrong.com/posts/bkG4qj9BFEkNva3EX/ai-development-incentive-gradients-are-not-uniformly | 2018 | blogPost | rk | LessWrong | 48 | 1TAI safety research |
What is ambitious value learning? | I think of ambitious value learning as a proposed solution to the specification problem, which I define as the problem of defining the behavior that we would want to see from our AI system. I italicize “defining” to emphasize that this is not the problem of actually computing behavior that we want to see -- that’s the... | https://www.alignmentforum.org/posts/5eX8ko7GCxwR5N9mN/what-is-ambitious-value-learning | 2018 | blogPost | Shah, Rohin | AI Alignment Forum | 49 | 1TAI safety research |
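Every row above follows the same nine-column schema given in the header, and the numeric prefixes in the Label column suggest class indices 1 and 2. As a minimal sketch in plain Python (the `parse_row` helper is illustrative, not part of the dataset's API), a row can be mapped onto that schema like this:

```python
# Column names taken from the table header above.
FIELDS = [
    "Title", "Abstract Note", "Url", "Publication Year",
    "Item Type", "Author", "Publication Title", "ID", "Label",
]

# Assumed class-label mapping, read off the Label column's numeric prefixes.
LABEL_NAMES = {1: "TAI safety research", 2: "not TAI safety research"}

def parse_row(cells):
    """Map a list of nine cell strings onto the column names above."""
    if len(cells) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} cells, got {len(cells)}")
    return dict(zip(FIELDS, cells))

# The row with ID 12, with the blank Abstract Note cell kept in position.
row = parse_row([
    "Economics of the singularity", "",
    "http://ieeexplore.ieee.org/document/4531461/", "2008",
    "journalArticle", "Hanson, Robin", "IEEE Spectrum", "12",
    "TAI safety research",
])
print(row["Publication Title"])  # IEEE Spectrum
```

Keeping empty cells in position (rather than letting them drift to the end of the row) is what makes a fixed-width schema like this parseable.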
This repository provides the RAFT dataset without needing to run remote code, so it is compatible with `datasets` >= 4.0.0.
Downloads last month: 63