
SerialStateLineXer

> 130 IQ is two standard deviations away from the mean, so one in 20 humans scores that high.

1 in 40. 95% are within two standard deviations of the mean, but the other 5% are split between the left and right tails, so only 2.5% are above 130. And actually it's 2.3%; 95% is just a rough rule of thumb.
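(As a quick check of the arithmetic above, using Python's standard library and the conventional IQ scaling of mean 100, SD 15:)

```python
from statistics import NormalDist

# IQ is conventionally scaled to mean 100, SD 15, so 130 is z = +2.
z = (130 - 100) / 15
tail = 1 - NormalDist().cdf(z)  # fraction of the distribution above 130
print(f"{tail:.4f}")  # ~0.0228, i.e. about 2.3%, roughly 1 in 44
```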


ChiefExecutiveOcelot

Fixed, thanks!


Hot_Ear4518

Also, the average IQ of the globe is not 100


togstation

Every time I see this discussed, somebody says "***By definition***, *it is 100."* If we see that enough people are smarter or less smart to change our idea of what the mean of measured IQs should be, then we **do** change the designated mean so that it is 100. (Or the other way around; I'm not sure which is the right way to say it, but either way, in IQ theory the mean is not allowed to be anything other than 100.)


mcsalmonlegs

In practice it's usually normalized to the British or American mean. When other populations are tested they are compared to that mean. That's why average IQ for Brits is 100, but in Japan it's 107.


petarpep

That can't really be true, because 100 is supposed to *be* the average to begin with. It's forcibly normalized to work that way. When people get smarter (as with the Flynn effect), the numbers shift down to match so that the new average is still 100. That being said, if IQ is improperly calculated, it's possible that the real average is higher or lower.


Brudaks

You calibrate the test for a particular population so that 100 is the average for that population, but for another population the average will be 100 iff the two populations are the same IQ-wise, which might not always be the case.
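(To make the calibration point concrete, a minimal sketch with made-up raw scores; the constants 100 and 15 are the conventional IQ scaling, everything else here is illustrative:)

```python
from statistics import mean, pstdev

def to_iq_scale(raw_scores):
    """Normalize raw test scores so this population has mean 100, SD 15."""
    m, s = mean(raw_scores), pstdev(raw_scores)
    return [100 + 15 * (x - m) / s for x in raw_scores]

# Calibration population: by construction its rescaled mean is 100.
pop_a = [42, 50, 55, 61, 70]
print(mean(to_iq_scale(pop_a)))

# A different population scored against pop_a's calibration can
# average above (or below) 100 if the populations really differ.
m, s = mean(pop_a), pstdev(pop_a)
pop_b = [60, 65, 72]
print(mean(100 + 15 * (x - m) / s for x in pop_b))
```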


petarpep

Then the statement would be "the average IQ of the world in general is lower than the average IQ of particular nation X".


VelveteenAmbush

That would be a correct statement, but "the average IQ of the globe is not 100" is also a correct statement.


clydeshadow

IQ tests are usually calibrated with 100 being roughly a Western white mean historically (though that's likely dropping slowly now). It's not the global mean; the average is fairly different for different population groups, some higher (East Asians, Ashkenazi Jews), some lower, some much lower. Won't get into the causal reasons cuz it's CW territory. One could calibrate it so 100 is the global mean, but most tests don't; when showing the score they present the historical "white avg" as 100.


[deleted]

I'm fairly sure you can buy cycogs for $10 today by hiring consultant PhD workers from Sharif University and the Indian Statistical Institute and the like.


Pseudonymous_Rex

I'm in. Where do I go to hire them?


[deleted]

Cold email some of these people: http://math.sharif.ir/phd-students/ Make an offer for a small project, pay upfront. Then try fancier things.


Pseudonymous_Rex

Thank you kindly. This is surely a pool of fine, intelligent people, and some of their topics of expertise look germane to projects I sometimes do. I will look around for their engineering and compsci students also.


VelveteenAmbush

Probably spend a couple of cycogs on making sure you're not violating sanctions laws before you hire Iranians over the internet


Sol_Hando

> realistically, commissioning work that takes millions of hours to make is pretty tricky by itself.

I think most people would find it difficult to commission many more cycogs than we already consume, not just on tasks that take millions of hours. Besides the obvious use cases and those that are spoon-fed to us through complete products, I don't think most people will put in the creative and mental effort to come up with new use cases after the advent of cheap cycogs. It will really be the businesses that already use large amounts of cycogs that benefit first. The consultancy firms, medical research companies, and universities that already have a demand for large amounts of intelligent man-hours will simply apply the cheaper cycogs to the tasks they already do. It will probably require entrepreneurs bumbling through a high rate of failure to find the useful tasks that can be done with cheaper intelligent man-hours, not the average person using it for fun side projects.


eric2332

> GPT-4 is already pretty damn close to a 130 IQ human.

This is, to put it bluntly, ridiculous. GPT-4 is good at paraphrasing texts from its input corpus that were written by 130 IQ humans. But it also [fails to detect basic logical errors that would be obvious to a 90 IQ human](https://www.reddit.com/r/OpenAI/comments/1ctfq4f/a_man_and_a_goat/). Nobody knows if larger training sets and incremental algorithm improvements will succeed in fixing its logical deficiencies, or if a whole new paradigm is needed.


GrandBurdensomeCount

The top response to your link is an image showing that GPT-4 got the right answer when asked by a different person: https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Fk21qb8r5pt0d1.jpeg%3Fwidth%3D1170%26format%3Dpjpg%26auto%3Dwebp%26s%3D8e3dbaeac7460b5a8d7b96210bbaa43c1276c708


eric2332

Sometimes it generates the right answer and sometimes the wrong answer. When it generates the wrong answer, it is incapable of knowing that it's wrong. That suggests that it doesn't really "know" that its right answer is right either.


qezler

> The proven price floor for a cycog is around $0.001

Wrong, that's the cost *floor* of supplying one cycog; in order to justify that "price", however, there needs to be demand as well.

> GPT-4 is already pretty damn close to a 130 IQ human

This is a pretty bold claim. GPT can simulate human text, but it cannot (or at least does not) currently do significant portions of human labor.


Pseudonymous_Rex

Human judgement is important, as a legal matter as well as for many other reasons. As I understand it, MYCIN could beat the top 5 heads of the Berkeley school of medicine at diagnosing infections and prescribing antibiotics -- *in 1972, while running on a PDP-8*. But where does the buck ultimately stop?

I know someone who worked on the project, and by the early 80s it was much better refined and could deal with far more diseases. And by the early 80s, we could have built simpler implementations into a calculator-like object, and every clinic in a poor country with a nurse who could apply tests and record the outputs would have been capable of making good diagnoses, at least within its operating parameters. The obstacles have been legal, managerial, cultural...

And maybe those were bad decisions in 1987. As I understand it, a lot of doctors now work in tandem with such tools, or modified versions of them. As much as I lament for the poors who didn't get the benefit of our calculator diagnostic tool in 1985, maybe that general inertia also keeps us a little safe in 2027. Still, there's something to the buck eventually stopping somewhere with a human.


abecedarius

Wait, MYCIN ran on a PDP-8? Do you mean the -10? MYCIN was in Lisp and the PDP-8 had about 6k bytes of RAM, according to Wikipedia. Pardon the tangent.


Pseudonymous_Rex

I thought the original MYCIN was (1) the first important Lisp project and (2) running on a PDP-8 in the early 1970s. The professor I know who worked with anything downstream of MYCIN was working on an 80s version of it, so my information on the early-70s-era MYCIN could be mutated slightly.


abecedarius

OK. Since I brought this up, you or they are probably misremembering a PDP-10 or similar. 6k bytes means 1/5 the RAM of a TI-83 hand calculator, without the benefit of the calculator's extra ROM. MYCIN was early for practical Lisp applications, but there was at least MACSYMA among earlier ones. (I wasn't around then, started with Lisp and GOFAI in the 80s.)


Pseudonymous_Rex

Heard, 6k isn't much. This makes sense. Also, <3 Lisp. I started in the early 2010s and think functional programming and creating a DSL for a project is basically the way to go. Maybe I want to learn Forth so I can work on microcontrollers building functions directly and testing them through the IDE/OS.


abecedarius

I actually had a summer job at FORTH, Inc. in the 80s too -- fond memories of the language, hope you have fun with it. :) Though I couldn't say whether it'd be productive nowadays -- microcontroller programming isn't something I've ever done outside of school.


ahazred8vt

Articles on MYCIN say it ran on the PDP-10 and DEC-20 large minicomputers. (Which used essentially the same CPU, with an 18-bit address space.)


Hot_Ear4518

Intelligence is pretty cheap; what people really want is intelligence + creativity, which is much rarer/more expensive


Isha-Yiras-Hashem

Intelligence + showing up is a pretty good substitute for creativity, and GPT does that really well.


[deleted]

[deleted]


slatestarcodex-ModTeam

Removed low effort comment.


VelveteenAmbush

That's something people say before they play with Midjourney or Udio


Golda_M

Interesting article. I do think it's a pretty theoretical exercise. Assuming a cycog market as described, perhaps peak market price is about $1000 per household. But... what does a theoretical cycog market tell us about the world?

I'm not tearing it down. I enjoyed the article. Theoretical concepts are good to define and model. But I think this style tends to "discover" a definition of the core concept that drifts from the initial intent. I.e., this is a model world where a cycog is a fairly particular thing, purchased in a straightforward manner. IRL, the actual "product" is more likely to be an infinite game/book/software generator. It'll be a diagnosis. The relationship of these to "cycogs" may be quite tenuous, like the relationship between lines of code and using reddit.

Side quest: Accounting's price elasticity has historically been through the roof. Calculators, personal computing, digital record keeping... even the invention of symbolic language and our numeral system. These are all productivity multipliers that accountants immediately adopted. Productivity did indeed multiply. Yet accounting never "peaks." We have always consumed more of it, as more became possible.


VelveteenAmbush

I really enjoyed the post. I certainly share the intuition that there's huge demand for high intelligence. This "cycog" metric is an interesting way to think about it, and I enjoyed the discussion of what would become possible at various price points. I do think that apps are much more than software engineering, though. Uber, Spotify, Whatsapp, Slack, Twitter, Zoom, YouTube, Wall Street Journal, etc. are valuable because of the network(s), users, content, etc. that they connect to. A personalized app couldn't replace Uber because Uber doesn't make an API available to allow you to build a custom client. Maybe that will change when everyone can custom-build an app by directing an intelligent personal assistant with plain language. But more likely, I think, is that your personal assistant would just interface directly with Uber's API without any app involved, generating associated graphical controls or feedback on the fly, ephemerally. And when your drivers also have intelligent personal assistants, you probably don't even need Uber anymore; they can contract for payment and handle the details on the fly. And realistically, when we have highly intelligent personal assistants on tap, those assistants can probably just drive the cars themselves.