Interview with Singularity Institute Research Fellow Luke Muehlhauser

This idea probably just comes from looking at the Blue Brain project that seems to be aiming in the direction of WBE and uses an expensive supercomputer for simulating models of neocortical columns... right, Luke? :)

(I guess because we'd like to see WBE come before AI, since creating FAI is a hell of a lot more difficult than ensuring a few (hundred) WBEs behave in at least a humanly friendly way, and can thereby be of some use in making progress on FAI itself.)

[SEQ RERUN] Occam's Razor

Perhaps the following review article can be of some help here: A Philosophical Treatise of Universal Induction

Singularity Institute Strategic Plan 2011

Time to level-up then, eh? :)

(Just sticking to my plan of trying to encourage people for this kind of work.)

Or such problems are not Normal Anomaly's comparative advantage, and her time is actually better spent on other things. :P

Singularity Institute Strategic Plan 2011

Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

The trouble is, 'formalizing open problems' seems like by far the toughest part here, so it would be nice if we could employ collaborative problem-solving to crack this part of the problem... say, by formalizing how to formalize various confusing FAI-related subproblems and throwing that on MathOverflow? :) Actually, I think LW is a more appropriate environment for at least attempting this endeavor, since it is, after all, what a large part of Eliezer's sequences tried to prepare us for...

Singularity Institute Strategic Plan 2011

...open problems you intend to work on.

You mean we? :)

...and we can start by trying to make a list like this, which is actually a pretty hard and important problem all by itself.

I said "you" because I don't see myself as competent to work on decision theory-type problems.

Help Fund Lukeprog at SIAI

I wonder if anyone here shares my hesitation to donate (only a small amount, since I unfortunately can't afford anything bigger) due to thinking along the lines of "let's see, if I donate $100, that may buy a few meals in the States, especially CA, but on the other hand, if I keep it, I can live on it for about two-thirds of a month, and since I also (aspire to) work on FAI-related issues, isn't that a better way to spend the little money I have?"

But anyway, since even the smallest donations matter (tax laws an' all that, if I'm not mistaken) and losing $5 isn't going to kill me, I've just made this tiny donation...

IntelligenceExplosion.com

What is the difference between the ideas of recursive self-improvement and intelligence explosion?

They sometimes get used interchangeably, but I'm not sure they actually refer to the same thing. It wouldn't hurt if you could clarify this somewhere, I guess.

Hanson Debating Yudkowsky, Jun 2011

How about a LW poll regarding this issue?

(Is there some new way to make one, since the site redesign, or are we still at vote-up-down-karma-balance pattern?)

SIAI’s Short-Term Research Program

Even if we presume to know how to build an AI, figuring out the Friendly part still seems to be a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless F-wise, even though they may lead to a general AI.

What we actually need is knowledge about how to build a very specific type of AI, and unfortunately, it appears that the A(G)I (sub)field, with its "anything that works" attitude, isn't going to provide it.

Correct!

Advice for AI makers

Well, this can actually be done (yes, in Prolog with a few metaprogramming tricks), and it's not really that hard - only very inefficient, i.e. feasible only for relatively small problems. See: Inductive logic programming.
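For a rough sense of why such search is feasible only for small problems, here is a toy sketch (in Python rather than Prolog, and enumerating arithmetic expressions rather than ILP's logic-program clauses — the names and setup are my own illustration, not a real ILP system): it brute-forces hypotheses until one fits the examples, and the hypothesis space blows up exponentially with depth.

```python
# Toy illustration of inductive synthesis by brute-force enumeration.
# Real ILP systems search over logic-program clauses, but the cost
# profile is similar: the hypothesis space grows exponentially with
# expression depth, so only small problems are feasible.
from itertools import product

def enumerate_exprs(depth):
    """Yield expression strings over a variable x and small constants."""
    if depth == 0:
        yield from ("x", "1", "2")
        return
    yield from enumerate_exprs(depth - 1)
    for op in ("+", "-", "*"):
        for left, right in product(enumerate_exprs(depth - 1), repeat=2):
            yield f"({left} {op} {right})"

def induce(examples, max_depth=2):
    """Return the first enumerated expression consistent with all
    (input, output) examples, or None if the depth bound is too small."""
    for expr in enumerate_exprs(max_depth):
        if all(eval(expr, {"x": i}) == o for i, o in examples):
            return expr
    return None

# "Learn" the function 2x + 1 from three input/output examples.
print(induce([(0, 1), (1, 3), (2, 5)]))
```

Even this tiny language has thousands of depth-2 hypotheses; adding one more depth level or a couple of operators multiplies the search space dramatically, which is the inefficiency the comment refers to.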

No, not learning. And the 'do nothing else' parts can't be left out. This shouldn't be a general automatic programming method, just something that goes through the motions of solving this one problem. It should already 'know' whatever principles lead to that solution. The outcome should be obvious to the programmer, and I suspect realistically hand-traceable. My goal is a solid understanding of a toy program exactly one meta-level above hanoi.
This does seem like something Prolog could do well; if there is already a static program that does this I'd love to see it.
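For reference, the base-level program that the "one meta-level above" program would have to derive — the classic recursive Hanoi solver — can be sketched as follows (a standard textbook version, not the meta-level program being asked for):

```python
# Classic recursive Towers of Hanoi: the base-level "toy program".
# The meta-level program under discussion would have to *derive*
# something like this from the puzzle's rules, rather than execute it.
def hanoi(n, src, dst, via):
    """Return the list of (from_peg, to_peg) moves transferring
    n disks from src to dst, using via as the spare peg."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, via, dst)     # clear the top n-1 disks
            + [(src, dst)]                  # move the largest disk
            + hanoi(n - 1, via, dst, src))  # restack the n-1 disks

print(hanoi(3, "A", "C", "B"))  # 2**3 - 1 = 7 moves
```

The solver itself is hand-traceable in exactly the sense the comment wants; the hard part is the meta-level program that arrives at this recursion from a declarative statement of the rules.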

Meanings of Mathematical Truths

This kind of thinking (which I actually quite like) presupposes the existence of abstract mathematical facts, but I'm at a loss figuring out in what part of the territory they are stored, and, if this is a wrong question to ask, what precisely makes it so.

Does postulating their existence buy us anything other than TDT's nice decision procedure?

I can infer stuff about mathematical facts, and some of the facts are morally relevant. Whether they "exist" is of unclear relevance.

Meanings of Mathematical Truths

Talking or reasoning *about* the uncomputable isn't the same as "computing" the uncomputable. The first may very well be computable while the second obviously isn't.

Obviously, but I didn't mean to imply that it was. My question is what we are actually reasoning about when we are reasoning about the uncomputable. Apologies if this wasn't clear.
It seems to me that oracle computations and accelerated Turing machines are related to counterfactual reasoning in the sense that they suppose things that are not the case, such as Galilean velocity addition, or that we can obtain the result of a non-halting computation in finite time and use that to compute further results.

Meanings of Mathematical Truths

Logic is useful because it produces true beliefs.

I'd rather say it *conserves* true beliefs that were put into the system at the start, but these were, in turn, produced inductively.

[math statements] would be just as valid in any other universe

I've often heard this bit of conventional wisdom but I'm not totally convinced it's actually true. How would we even know?

Well, what if in some other universe every process isomorphic to a statement "2 + 2" concludes that it equals "3" instead of "4" - would this mean that the abstr...

Perhaps to say "valid in any other universe" is a language mismatch, since validity doesn't refer to universes, only to the rules of the system.
If it concluded 3 instead of 4, it would not be isomorphic to our "2+2". Both systems of mathematics (in our universe and the other one) have to be isomorphic as a whole to enable translation between them.

Meanings of Mathematical Truths

"Controlled by abstract fact" as in Controlling Constant Programs idea?

Since this notion of our brains being calculators of - and thereby providing evidence about - certain abstract mathematical facts seems transplanted from Eliezer's metaethics, I wonder if there are any important differences between these two ideas, i.e. between trying to answer "5324 + 2326 = ?" and "is X really the right thing to do?".

In other words, are moral questions just a subset of math living in a particularly complex formal system (our "moral f...

That post gives a toy model, but the objects of acausal control can be more
general: they are not necessarily natural numbers (computed by some program),
they can be other kinds of mathematical structures; their definitions don't have
to come in the form of programs that compute them, or syntactic axioms; we might
have no formal definitions that are themselves understood, so that we can only
use the definitions, without knowing how they work; and we might even have no
ability to refer to the exact object of control, and instead work with an
approximation.
Also, consider that even when you talk of "explicit" tools for accessing
abstract facts, such as apples or calculators or theorems written on a sheet of
paper, these tools are still immensely complicated physical objects, and what
you mean by saying that they are simple is that you understand how they are
related to simple abstract descriptions, which you understand. So the only
difference between understanding what a "real number" is "directly", and
pointing to an axiomatic definition, is that you are comfortable with intuitive
understanding of "axiomatic definition", but it too is an abstract idea that you
could further describe using another level of axiomatic definition. And in fact
considering syntactic tools as mathematical structures led to generalizing from
finite logical statements to infinite ones, see infinitary logic
[http://en.wikipedia.org/wiki/Infinitary_logic].

The Multiverse Interpretation of Quantum Mechanics [link]

Sean Carroll has a nice explanation of the general idea on his blog that falls somewhere between New Scientist's uninformativeness and the usual layman-inaccessible arXiv article.

[SEQ RERUN] Inductive Bias

Probability Theory: The Logic of Science?

The Urgent Meta-Ethics of Friendly Artificial Intelligence

Perhaps he's referring to the part of CEV that says "extrapolated as we wish that extrapolated, interpreted as we wish that interpreted". Even logical coherence becomes in this way a focus of the extrapolation dynamics, and if this criterion should be changed to something else - as judged by the whole of our extrapolated morality in a strange-loopy way - well, so be it. The dynamics should reflect on itself and consider the foundational assumptions it was built upon, including the compellingness of the basic logic we are currently so certain about - and of...

Eliezer to speak in Oxford, 8pm Jan 25th

...and he also gave a talk at Oxford just recently, at FHI's Winter Intelligence Conference: http://www.fhi.ox.ac.uk/events_data/winter_conference

Videos should be available soon, and I must admit I'm a bit more eager to hear what he and others had to say at that gathering than what his forthcoming talk will bring us...

I don't think there's anything in there that readers of the Sequences will be unfamiliar with.

Ah, yes, I was hoping that there'd be videos of those too. Glad to hear they should be available soon; I'm looking forward to seeing his and some of the others.

Best career models for doing research?

Being in a similar position (also as far as aversion to moving to, e.g., the US is concerned), I decided to work part time (roughly 1/5 of the time, or even less) in the software industry and spend the remainder of the day studying relevant literature, leveling up, etc. for working on the FAI problem. Since I'm not quite out of the university system yet, I'm also trying to build some connections with our AI lab staff and a few other interested people in academia, but with no intention to actually join their show. It would eat away almost all my time, so I could wo...

He might have changed his mind by now, but in case you missed it: Recommended Reading for Friendly AI Research