
Kikker

Member
  • Content count: 113
  • Joined
  • Last visited

Community Reputation

-7 Poor

About Kikker

  1. Why IQ?

    It's very hard to find such relations. Maybe you can look for studies about certain activities being too simple or too complicated for a person and how that affects productivity and/or happiness. You could also frame your question differently by asking whether high IQ diversity is beneficial to a society, since a higher average IQ generally means a society can do more, faster, with the same number of people.
  2. Because an anarcho-capitalist society doesn't rule out organizations capable of violence, but does rule out the organization currently holding the monopoly of force?
  3. Seems his premise was that a lack of a monopoly of force would result in competition for that monopoly among organizations capable of violence. Hence there would be more violence without a monopoly of force.
  4. Good vs Evil - Not a choice

    Yet your only way of knowing intent is to look at the consequences of someone's actions... You, for example, provide consequences of the global elitists' actions to argue their evil intent, so what's stopping you from doing the same with someone who kills someone?
  5. Good vs Evil - Not a choice

    You're already inconsistent with your definition of evil: is it intention or consequence? Or do we just assume evil intent when consequences aren't in everyone's interest? Isn't that convenient? How you think your incoherent mess can add anything to "the age-old debate" is beyond me.
  6. Driving Improves...

    From a "science" journal with no link to the actual paper, of course... If there is a paper to link to, that is.
  7. GM mosquitos

    That link is dead, btw. But you're probably talking about mosquitoes which are modified to stop specific (disease-carrying) mosquito populations from reproducing? What exactly are the dangers of that?
  8. Just noticed I was a bit obscure; I meant that mercury pollution generally increases with the new lamps. I'm just speculating, because I'm not going to research the issue in detail. But I think you can imagine a push from Germany for a greener Europe, abolishing the import tariffs. Then, when they decided the market was changing too slowly (probably estimated at decades) because of the already established brands, they decided to campaign in favor of the new lamp. Add a little lobby money from established overseas companies (or maybe domestic ones looking to destroy competition) and you get a situation where report after report is produced outlining the advantages of the new lamps, occasionally obscuring the negatives. Public opinion changes, a few major companies which produce both lamps are happy to switch completely for compensation (probably a few bribes), and the path is clear to ban the normal lamp. In short, it fits the green agenda, and government action hastens (maybe causes) the transition pretty well in this case.
  9. They were about 175% more expensive but about 600-800% more durable, meaning they're cheaper in the long run. The color spectrum is experienced as less pleasant, but they should achieve any brightness a normal bulb can after a few seconds. Energy use drops from 40 watts to 8 watts, meaning they're five times as efficient. The mercury is potentially dangerous. The argument is that a CFL compensates for its own mercury pollution by reducing the need for power plants, which in the case of coal produce mercury pollution. The total mercury pollution should be 200% or so more if your country has a lot of other energy sources. But you failed to mention the import tariffs raised on compact fluorescent lamps until 2008 under pressure from the normal light bulb lobby to protect the European market. You just experienced a power switch in a government agency. (A rough lifetime-cost comparison is sketched after this list.)
  10. Artificial vs Natural Intelligence

    Backpropagation introduces the concept of self-correction instead of environmental correction. It is contradictory to genetic algorithms: something can't be a genetic algorithm and a neural network at the same time, unless you have a population of self-correcting routines encapsulated in a genetic algorithm. You misunderstand: the fastest subroutine to finish doesn't equal the subroutine which actually does something. You only explain how an already working subroutine gets faster, not how you get a working subroutine in the first place or how it rules out unnecessary actions. If your estimate is so rough that it can differ by dozens of orders of magnitude, you should just argue that the actual number is unknown. I assumed you were comparing neurons to transistors from here: Or do I need to explain that each layer of transistors could halve the switches per second of the whole thing? You're a good sport, you know: first slander my arguments and then demand I don't refute you. (A minimal backpropagation sketch follows this list.)
  11. Artificial vs Natural Intelligence

    I don't understand how you see this process as a genetic algorithm... There is no combination of genetic traits between subroutines to make new ones, no killing of unfit subroutines, and you don't even try to explain how random mutations come into being or how the fastest subroutine is also evaluated as the best one to do a particular thing. (A sketch of those missing pieces follows this list.) Before trying to fit a square peg into a round hole, please just read up on neural networks and backpropagation, which more closely resemble your own description of how the brain works. I don't understand where you get your 400 billion from; even if it's possible, the observations I googled (I don't have the time to spend a workweek understanding precisely what scientists agree on) all come down to 20-1000 switches per second on average across the brain (link your article if you don't agree). That's not even near 400 billion. More importantly, a transistor can't hold even remotely the amount of information needed to represent a neuron. You would need at least one transistor for the neuron itself and one for every connection it has (an average of a 1000) to other neurons, which is still wildly inaccurate. Your estimate needs some serious adjustment.
  12. Artificial vs Natural Intelligence

    I'll put the other arguments on hold for a post or two. Of all the things you don't explain, you choose this core idea you hold... What is this base level you're talking about? How are they almost identical? What exactly is the theory of how the brain works?
  13. Artificial vs Natural Intelligence

    I admit the phrasing was sloppy. But if you keep insisting I meant X = X and X = X+1, then you're assuming I'm an idiot beforehand. Just to be clear, what I meant was: you could have a goalstate(x), but you could also have a goalstate(y) which has a subgoal(x). Since you generally don't know subgoal(x), it's potentially dangerous (a toy illustration of such a hidden sub-goal follows this list). I agree that the brain structure is essentially* encoded in DNA. But if you learn about physics, your DNA doesn't change after every bit of knowledge you gain; your structure has enabled you to learn it. So AI researchers don't have to recreate the process that produced a human structure after 4 billion years; you could just reverse-engineer the governing principles of the current brain structure in order to get a human-level AI. (* It's mathematically impossible to encode every neuron in DNA, so it's probably a certain growing pattern which has different outcomes as environmental factors change, most importantly during pregnancy.) I didn't say that; to adjust your analogies: "A rock being heavier than a bullet doesn't imply bullet-level dangerousness." "A car having more mass than the H-bomb doesn't imply it having H-bomb-level deadliness." The very fact that you had to change the sentence structure completely to disprove an analogy should ring a bell in your head. I'm not sure what your syllogism clarifies, as it can be used to argue any property of a human to be the same for an AI. I assume you don't want to argue that:

    Humans want to take over the world. Humans have legs. AI wants to take over the world. Therefore it is implied AI has legs.

    Human legs are strictly related to human biology. Human biology implies desires, among other things. Therefore it is implied that an AI that wants to take over the world has legs.

    To be specific, the condition that the desire to take over the world requires human intelligence isn't believable. You can make an AI observe a (simulated) situation which resembles a world taken over and turn it into a goal state. And like I said before, a world taken over could be a hidden sub-goal of a goal state which we had programmed in. And no, I do not know any action of my own that doesn't have an evolutionary foundation behind it. I'm not sure what an evolutionary biological motive is if it's different from an evolutionary foundation.
  14. Artificial vs Natural Intelligence

    The latest breakthroughs (convolutional neural networks, AlphaGo, the Netflix algorithm) don't make use of a genetic algorithm. There have even been serious doubts about the efficiency of genetic algorithms in their original form. I don't know what you think they figured out in those papers, but genetic algorithms are generally used to generate connection structures, while backpropagation is used to optimize the parameters (a compact sketch of that division of labor follows this list). By analogy to humans: most of our intelligence isn't encoded in our DNA; rather, we learn it throughout our lives through a process we haven't figured out. You don't have to recreate the genetic process in order to recreate that learning ability. I don't know what semantic game you're playing, but anything can be a goal state and anything can be a sub-goal state. If that wasn't clear from me using world domination as both a goal state and a sub-goal state, I apologize. World domination could be the easiest path to accomplish the goal "continue humanity's existence", for example. Your a-and-b distinction is a bit weird, since the difference isn't very clear. The structure in which a desire can develop is most likely programmed; otherwise the structure is written by a program which is in turn programmed (etc.). I don't understand how the two things are mutually exclusive. Here is an explanation of how a cooperative AI might be developed. When talking about human-level intelligence, I assumed it meant the same learning ability and the same skill ability. That doesn't imply the same moral system or the same desires. To put it differently: a bear's ability to defeat a human in close combat doesn't imply human-level intellect regarding close combat. An AI's ability to take over the world doesn't imply human-level intellect regarding anything except its ability to take over the world. Wrong post?
  15. Artificial vs Natural Intelligence

    (Paraphrasing an example from The Master Algorithm.) Imagine you're trying to cure cancer: you would need a way to interrupt the process of cancer cells while leaving the other cells unharmed. In order to do that you need an accurate model of the cell's workings, precise enough to identify weak points unique to cancer cells. Also, cancer cells are caused by many different mutations, meaning you'll need to be able to individually identify the cancer within each patient. In theory you could accomplish this by creating a Markov model (let's just say it's a spinoff of Bayesian models) which explores each possible state and calculates the probabilities of different state transitions. (A minimal sketch of such a transition model follows this list.) You of course need to manually note down each state, or specify observations in a way that states can be inferred. You'll also need an algorithm to manipulate the states and add possible logical statements (hypotheses); the Markov model will then check whether they're likely or not. You then have an algorithm which can model the workings of that cell. In the real world, though, we can't even see all the states and workings of a cell to begin with, making the state space incomplete. A Markov model can't handle that, and you'll need a way to reduce the noise (or a better observer). Something doesn't need the desire to take over the world in order to actually take over the world: a goal state could have a requirement to take over the world. Besides, most goal states (such as taking over the world) are too complex to formulate manually. You'll need to let the AI observe a goal state, infer the conditions behind it, and then let it do its thing. Furthermore, the main point is not that Skynet happens, but that once an AI exists which can outsmart most of mankind, we wouldn't be able to stop it once it set out to do whatever we wanted it to do. A faulty goal state could then cause serious damage. Which papers?
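To make the lamp arithmetic in item 9 concrete, here is a minimal cost sketch in Python. Only the 40 W vs. 8 W figures and the rough price and durability ratios come from the post; the bulb price, electricity price, and usage horizon are invented assumptions.

```python
import math

# Rough lifetime-cost comparison of an incandescent bulb vs. a CFL.
# Prices, electricity cost, and the 7,000-hour horizon are assumptions.
INCANDESCENT_PRICE = 1.00                  # euros, assumed
CFL_PRICE = INCANDESCENT_PRICE * 2.75      # "175% more expensive"
INCANDESCENT_LIFE_H = 1_000                # hours, assumed typical
CFL_LIFE_H = INCANDESCENT_LIFE_H * 7       # midpoint of "600-800% more durable"
INCANDESCENT_W, CFL_W = 40, 8              # watts, from the post
PRICE_PER_KWH = 0.25                       # euros, assumed

def lifetime_cost(price, watts, life_h, horizon_h=7_000):
    """Purchase cost (replacing burned-out bulbs) plus energy cost
    over a fixed number of burning hours."""
    bulbs = math.ceil(horizon_h / life_h)
    energy_kwh = watts * horizon_h / 1000
    return bulbs * price + energy_kwh * PRICE_PER_KWH

print(f"incandescent: {lifetime_cost(INCANDESCENT_PRICE, INCANDESCENT_W, INCANDESCENT_LIFE_H):.2f} EUR")
print(f"CFL:          {lifetime_cost(CFL_PRICE, CFL_W, CFL_LIFE_H):.2f} EUR")
```

Under these assumptions the incandescent costs about 77 EUR over the horizon and the CFL about 17 EUR, which is the "cheaper in the long run" claim.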
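For the backpropagation-as-self-correction point in item 10, a minimal sketch: a tiny network learns XOR by propagating its own error backwards and nudging its weights, with no population, crossover, or selection anywhere. The network size, learning rate, and step count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

HIDDEN, LR = 4, 1.0
W1 = rng.normal(0, 1, (2, HIDDEN)); b1 = np.zeros((1, HIDDEN))
W2 = rng.normal(0, 1, (HIDDEN, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: the network corrects itself against its own error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= LR * h.T @ d_out; b2 -= LR * d_out.sum(axis=0, keepdims=True)
    W1 -= LR * X.T @ d_h;   b1 -= LR * d_h.sum(axis=0, keepdims=True)

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```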
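For item 11, a sketch of the pieces a genetic algorithm is actually made of: a population, selection (killing the unfit half), crossover of traits between candidates, and random mutation. The bit-counting fitness function is a deliberately trivial stand-in.

```python
import random

random.seed(42)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02

def fitness(genome):
    return sum(genome)                          # toy objective: count the 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)       # combine traits of two parents
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]                  # occasional random mutation

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]      # the unfit half is killed
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(max(fitness(g) for g in population))      # tends toward GENOME_LEN (20)
```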
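For the hidden sub-goal point in item 13, a toy illustration: a naive backward-chaining planner is asked for one goal and quietly routes through a sub-goal nobody asked for. The action names and world model are invented for the example.

```python
# action -> (preconditions, effects), all invented for illustration
ACTIONS = {
    "gather_resources": (set(), {"has_resources"}),
    "seize_power_grid": ({"has_resources"}, {"controls_infrastructure"}),
    "run_factories": ({"controls_infrastructure"}, {"paperclips_made"}),
}

def plan(goal, state, depth=0):
    """Naive backward-chaining: to reach a goal fact, find an action that
    produces it and first plan for that action's preconditions."""
    if goal in state or depth > 10:
        return []
    for name, (pre, post) in ACTIONS.items():
        if goal in post:
            steps = []
            for sub in pre:               # hidden sub-goals appear here
                steps += plan(sub, state, depth + 1)
            return steps + [name]
    return []

# We only asked for paperclips; the plan quietly includes seizing the grid.
print(plan("paperclips_made", set()))
# -> ['gather_resources', 'seize_power_grid', 'run_factories']
```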
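For item 14's division of labor, a compact sketch in which an evolutionary outer loop proposes network structures (just the hidden-layer width here, with a population of one, so it is a stripped-down stand-in for a real genetic algorithm) while backpropagation tunes each candidate's parameters. The task and all the constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_and_score(hidden, steps=3000, lr=1.0):
    """Backpropagation tunes the parameters of one candidate structure;
    the final loss serves as that structure's fitness (lower is better)."""
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros((1, hidden))
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros((1, 1))
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)
    return float(((out - y) ** 2).mean())

# Evolutionary outer loop over structures: mutate the width, keep the fitter.
hidden, best = 1, train_and_score(1)
for _ in range(6):
    candidate = max(1, hidden + int(rng.integers(-1, 2)))  # structural mutation
    loss = train_and_score(candidate)
    if loss < best:
        hidden, best = candidate, loss

print(hidden, round(best, 4))   # wider hidden layers tend to solve XOR
```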
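For the Markov-model example in item 15, a minimal sketch: enumerate cell states, write transition probabilities into a matrix, and push a probability distribution through it. The three states and every number in the matrix are invented for illustration.

```python
import numpy as np

states = ["healthy", "mutated", "cancerous"]
# T[i][j] = probability of moving from states[i] to states[j] in one step
T = np.array([
    [0.98, 0.02, 0.00],
    [0.10, 0.85, 0.05],
    [0.00, 0.00, 1.00],   # cancerous is absorbing in this toy model
])

p = np.array([1.0, 0.0, 0.0])   # start as a healthy cell
for _ in range(100):            # distribution after 100 transitions
    p = p @ T

for name, prob in zip(states, p):
    print(f"{name}: {prob:.3f}")
```

The real difficulty the post points at is that this requires the full state set and transition probabilities up front, which is exactly what we don't have for real cells.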