


About Kikker

  1. Advice For Hyper Intellectuals

    MysterionMuffles talks about intellectual capability, not wisdom. At the same time, high intellectual capability (and believing you have it) could blind you to good ideas other people might have. Smart people might think that any ideas they come up with are better than any ideas their less smart peers have, simply because they are more intellectually capable overall; in other words, an overestimation of their superiority. A tribe can perfectly well consist of smart people only....
  2. Who is "we"? And what is the relationship between "we" and "someone"?
  3. It's cheaper, just not as cheap as it sounds. I tried to answer this question some time ago, but then went to watch those movies to see what you're on about. The thing is, I didn't return to this forum afterwards. So the first thing to keep in mind is that neural nets are heavily inspired by the brain, so they're already designed to emulate a "known" inertia system, the one we operate on. Though several complexities of human brains aren't simulated, and neural nets don't have nearly as many neurons as a human (even the big ones like in the video). Given that, those mini-abstractions I talked about are simply a way to describe what happens in a neural net. With images you can, to a certain extent, trace back what every layer receives and get a nice little picture of it like the one below. But what actually happens isn't restricted by the way people try to conceptualize the inner workings of a net. Simply put, the abstractions I mentioned are meant for you (and me) to get some idea. An actual neural network can manipulate data in all kinds of ways which don't conform to the way I described there. Also, when I talked about the accuracy of those abstractions, I actually just meant their usefulness in doing what the neural net is supposed to do. I assume that some kind of abstraction takes place and that its usefulness determines the outcome. Edges to features to faces.
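The "mini-abstraction" idea above can be sketched in a few lines: each layer of a feedforward net re-encodes the previous layer's output into a smaller, more abstract representation. The layer widths and random weights here are purely illustrative, not a trained model.

```python
import numpy as np

# Minimal sketch: each layer transforms the previous layer's output,
# which is one "mini-abstraction" step (pixels -> edges -> features -> ...).
# Layer sizes are arbitrary placeholders.
rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 2]  # hypothetical widths

def relu(v):
    return np.maximum(0, v)

x = rng.random(layer_sizes[0])  # fake "pixel" input
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    W = rng.standard_normal((n_out, n_in)) * 0.1
    x = relu(W @ x)  # one abstraction step

print(x.shape)  # (2,) -- the final, most abstract representation
```

Nothing here is learned; the point is only that the data passes through a chain of re-encodings, and how useful those re-encodings are determines the outcome.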
  4. I don't know if you got stuck on the idea that "speech is necessary for thought" (or that speech is a way of thinking) is the same as saying "speech is the same as thought", or if you fundamentally disagree with the relationship between thought and communication described by Jordan?
  5. It's a lot of time spent keeping up to date with progress in artificial intelligence if it's not your field. The only thing I'm wary of (and annoyed by) is people taking offense, for example the debacle that played out here. Well, that requires significant pre-processing: an algorithm that cuts the relevant pieces out of a larger image (how can you determine that something is relevant?) and then uses humans for the final labeling, which is probably based on consensus. So you would need a network trained in finding data to be labeled, which would nonetheless be a far simpler network. Ah, I didn't mention that. Consider a neural net of 4 layers, each layer with 4000 neurons. Then each neuron has at least 4000 connections, and neurons in the middle layers (2, 3) have 8000. So yes, a neural net can have neurons with thousands of connections. But more importantly, neurons involved in the early stages of human vision have significantly fewer connections and a particular connection structure; that structure was translated to the neurons in a neural net. That was the "inspired" part. I don't understand how my statement suggested anything about unknown parameters. I simply mentioned that more pixels (better resolution) cost more computing power. Mmm... I consider abstractions at a much lower level than you refer to here. An abstraction from a sharp contrast in an image to an edge (of something) is for me already an abstraction; pile a bunch of those mini-abstractions together (edges to corners, for example) and you can get something useful (like a square).
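The connection-count arithmetic in the post (4 fully connected layers of 4000 neurons each) can be checked directly. Note the post counts layers 1-4; the code below uses 0-based indices.

```python
# Connection count for one neuron in a fully connected net of
# 4 layers x 4000 neurons, as described in the post.
layers = [4000, 4000, 4000, 4000]

def connections_per_neuron(layers, i):
    """Connections touching one neuron in layer i (0-indexed)."""
    incoming = layers[i - 1] if i > 0 else 0
    outgoing = layers[i + 1] if i < len(layers) - 1 else 0
    return incoming + outgoing

print(connections_per_neuron(layers, 0))  # 4000 (first layer: outgoing only)
print(connections_per_neuron(layers, 1))  # 8000 (middle layer: both sides)
print(connections_per_neuron(layers, 3))  # 4000 (last layer: incoming only)
```

This confirms the figures in the post: every neuron touches at least 4000 connections, and the two middle layers touch 8000.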
  6. Since it seems that nobody is going to answer this, I will attempt an alternative answer. Firstly, I'm not an expert. Secondly, I assume you already saw a bunch of vaccine-specific information sources, so I would recommend this video to get a sense of how your immune system works. Don't skip the boring parts, or rewatch if it's too fast. It is fundamentally important to get a sense of how your immune system works in the first place in order to reliably judge information as probable or utter nonsense (which is the most common distinction a non-expert has to make). I say this because your assertion about chromosomes being altered by an injection of (presumably) cells, and your estimation of that article as probable, make me believe you don't understand something really fundamental about your body, before even taking vaccines into consideration. If you find me patronizing, then you shouldn't put "Please don't laugh" in your post, suggesting that your arguments are normally met with ridicule.
  7. Europe was the birthplace of mankind

    That title is incredibly misleading; the creature discovered was in fact what we would consider an ape. It still walked on four feet and has, presuming it is in fact our ancestor, multiple ape and hominid branches between it and us. Most importantly, our species (Homo sapiens) originated in Africa 7 million years later (200,000 years ago), which makes Africa still the "birthplace" of mankind, unchanged by these findings.
  8. Yes and no. What you're describing hasn't been the state of the art since 2011. You can with relative ease make a neural network that recognizes a chair. There have been two main breakthroughs since we were able to do that. Firstly, recognizing a chair is a single true-or-false question, so it's still manageable to make a traditional (fully connected) neural network and use it to recognize just chairs. If you would like to recognize other objects in the images, you either have to make more networks or expand the one currently recognizing chairs to also recognize other objects; the latter increases training time dramatically. Secondly, you would need a data set of manually labeled pictures with chairs in them, and not just a few hundred: you need tens of thousands of manually labeled pictures to get good results. These two problems are addressed by the introduction of convolutional networks (inspired by the actual neurons humans use) and adversarial networks. Convolutional networks significantly reduce the computing cost and increase the effectiveness of the initial stage of the network. In this initial stage the image is compressed to a standard input and low-level features like edges are extracted. Another property is that the filters used by convolutional networks are local (they operate on a specific part of the image). In an adversarial network we actually make sure that some abstraction takes place, and since it's just two networks figuring out those abstractions, we don't need to label the data anymore. Also note that in a traditional neural network we don't care what abstractions the algorithm makes; we just need the correct output (and we don't know what abstractions are made, if any). Now, however, we can directly see the accuracy of those abstractions in the generated image. Without the convolutional layers of that network the generated image would also be very blurry (higher resolution would be computationally unfeasible), but with convolutional layers a crisp image can be generated.
In the video above every object has some kind of abstraction and some kind of transformation to the other weather conditions. So though you don't strictly need this kind of abstraction when simply recognizing a chair, it's definitely needed in the case above. --- Looking back, for me it's a given that these kinds of networks need to make some kind of abstraction at some level in order to function. But maybe it isn't for you; I hope you can get some intuition from my explanation.
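The "local filter extracting low-level features like edges" idea can be shown with a bare-bones convolution: a single 3x3 kernel slides over the image, so its parameter count is fixed by the kernel size rather than the image size (unlike a fully connected layer). The image and kernel below are invented for illustration.

```python
import numpy as np

# Naive 2D convolution (no padding, stride 1) to illustrate local filters.
def conv2d(image, kernel):
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A toy image with a sharp vertical contrast in the middle...
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# ...and a Sobel-like vertical-edge detector.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])

edges = conv2d(image, kernel)
print(edges.max())  # 4.0 -- strongest response exactly at the contrast
```

The same 10-number kernel is reused at every position, which is where the computing-cost saving over a fully connected first stage comes from.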
  9. You sound like an advertisement because you keep presenting its advantages without actually explaining how the algorithm works, just citing videos (of one hour!), and the overview paper isn't going to cut it. In other words, your confidence in the advantages of the algorithm isn't reflected in the knowledge you display. I mean, do you know why bitcoin is limited to an average of 4* transactions per second? How is that avoided in Hashgraph? Basic questions left unanswered! Why don't you give a short summary of its inner workings if you already did the research? That said, it looks interesting; it seemingly has its roots in computer science rather than cryptography like blockchain. Though in the video it is mentioned very casually that the network can be disrupted when attackers hold 33.4% of the nodes (while control takes 66.8% of the nodes), whereas blockchain has a hard limit of 51% for both. That sounds significant to me, especially since there is no constant push for computational proofs like in bitcoin, meaning that inflating your control should be easier. * A block is 2400 transactions and the algorithm controlling bitcoin keeps an average of 1 block every 10 minutes (2400/(10*60) = 4 per second).
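The footnote's throughput arithmetic, using the post's own figures (~2400 transactions per block, one block per ~10 minutes on average):

```python
# Bitcoin throughput estimate from the post's figures.
transactions_per_block = 2400   # post's approximation of block capacity
seconds_per_block = 10 * 60     # ~10-minute average block interval

tps = transactions_per_block / seconds_per_block
print(tps)  # 4.0 transactions per second
```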
  10. Well, to explain the AI thing properly: you have two adversarial networks. An adversarial network is a setup with two competing neural networks: one tries to produce fake images which look like real images, and one tries to distinguish real images from fake images. The result is a network capable of producing very realistic images. So two adversarial networks, in this case one for winter and one for summer, each able to produce realistic fake images of its domain. The problem is the translation from one of the domains to the other. In the paper they used an implementation with variational encoders. Variational encoders are used to produce an interface between fake-image generation and humans inputting variables. To go one more layer in depth: an adversarial network can produce fake images from random noise, so each value of the random noise corresponds to a certain feature in the image. The problem is that those variables aren't exactly meaningful for our understanding, and trying out every single one of them to map the features which are meaningful is impossible. So you use variational encoders to impose a Gaussian distribution assumption on each variable, the assumption being that features we find meaningful have a normal distribution in the feature space. The research group uses two decoders and two encoders and the assumption of a shared latent space to map a summer (or day) image to a winter (or night) image. So you have an encoder for summer and an encoder for winter, and you place a restriction (the shared latent space assumption) on them: they should encode to the same space. Then either decoder can take the encoded image from either of the encoders (which is the actual breakthrough) and produce a winter or a summer image. This shared latent space could of course be extended to any kind of weather, though it would add significantly to the computing cost.
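The two-encoder/two-decoder structure is easier to see as data flow than as prose. This is a purely structural sketch: the "encoders" and "decoders" are random matrices standing in for trained networks, and the sizes are invented. The point is only that one latent code, produced by either encoder, can be fed to either decoder.

```python
import numpy as np

# Structural sketch of the shared-latent-space setup (not a trained model).
rng = np.random.default_rng(1)
IMG, LATENT = 16, 4  # hypothetical image and latent dimensions

enc_summer = rng.standard_normal((LATENT, IMG))  # summer -> shared space
enc_winter = rng.standard_normal((LATENT, IMG))  # winter -> shared space
dec_summer = rng.standard_normal((IMG, LATENT))  # shared space -> summer
dec_winter = rng.standard_normal((IMG, LATENT))  # shared space -> winter

summer_img = rng.random(IMG)

z = enc_summer @ summer_img   # encode a summer image into the shared space
winter_img = dec_winter @ z   # decode the SAME code into the winter domain
back = dec_summer @ z         # ...or back into the summer domain

print(winter_img.shape, back.shape)  # both live in image space
```

In the actual paper the shared-space restriction is enforced during training; here it is simply asserted by wiring both encoders into the same latent dimension.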
  11. AIs create new language

    The gains from information compression are trivial compared to the elimination of the ambiguity which natural languages have. For example, a noun can have several meanings, but a meaning can also have several nouns. Figuring this out is computationally expensive. Beyond the meaning of words, even grammar is ambiguous, and the only way we can make decent parsers is through the statistical approach, where we need (at least) tens of thousands of annotated sentences to train a parser to interpret sentences like a human would (~75% accuracy). So the problem is that there is information required to understand an English sentence which isn't encoded in the sentence itself, nor in the grammar rules, but is only apparent by observing the language in use. You can easily test and measure the information exchanged between two entities without knowing what they communicated, provided you can communicate with them separately. If the agents hold memory, then you can tell one agent a fact and see whether that information is transferred to the other agent. In the case of a chatbot, it may be that a chatbot A trained on a human data set or interaction will cause behavior changes in another chatbot B when they interact with each other.
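The tell-one-agent-a-fact test described above can be sketched as a toy protocol. The `Agent` class and its methods are invented for illustration; the key property is that the exchange between agents is treated as opaque, and only its effect on the second agent is measured.

```python
# Toy sketch of the information-transfer test: teach agent A a fact,
# let A and B communicate over an opaque channel, then query B separately.
class Agent:
    def __init__(self):
        self.memory = {}

    def tell(self, key, value):
        self.memory[key] = value

    def talk_to(self, other):
        # Opaque exchange: we never inspect the "language", only its effect.
        other.memory.update(self.memory)

    def ask(self, key):
        return self.memory.get(key)

a, b = Agent(), Agent()
a.tell("sky", "blue")   # give A a fact
a.talk_to(b)            # A and B interact
print(b.ask("sky"))     # "blue" -- the fact reached B, however it was encoded
```

A real pair of chatbots would use some learned protocol instead of a dictionary merge, but the measurement idea is the same: query B before and after the interaction and see what changed.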
  12. Imaginary things

    Since it comes up on this forum a lot in some way or another, I decided to make a topic about it. Mainly, people seem to disagree about the difference between real and unreal things. It seems fairly simple to distinguish an unreal being like Santa from a real being like the pope. But that's not what the disagreement is about; it seems to me that the disagreement is about imaginary orders, whether they are real or not, and what that means. To begin with, an organization is imaginary. To give an example: a government. When searching for something physical to represent a government, you will find nothing there. No single building or person that is part of a government can represent it, and more importantly, when they're all gone, a government can still exist as an imaginary organization. That means government is not real, right? Well, it depends; if you just mean that it has no single physical form to point to, then yes. But not real doesn't mean it's not important or easily discarded. A myth in the minds of millions of people becomes real because they will act according to that myth. It's actually observable: if you get a bird's-eye view of a government, you would be able to see certain behavioral patterns emerge from people believing in it. The same principle holds for imaginary orders, orders being overarching ideas about the workings of the world, for example Christianity but also communism. The interesting part of imaginary orders is that religions and ideologies have the same origins; they just argue over whether the world order is supernatural or natural. But you could also go closer to reality. Someone on this forum said that a squirrel could see the White House but could never see the government, thus making the government imaginary. The problem, though, is that a squirrel can't see the White House and identify it as the White House. It probably can't identify a house in general, making that imaginary too. I would argue that those concepts are indeed imaginary.
Let's take a better example, a chair: there is no collection of features you can use to describe all the chairs in the world without classifying a bunch of non-chairs as chairs. However, there can be a decision tree describing every chair in the world without any misclassification. Better yet, you probably have a similar thing constructed in your brain. Very small animals don't have the ability to do this kind of classification, while bigger animals don't have any incentive to waste brain capacity on the classification of chairs. Nevertheless, that doesn't make the concept of chairs less imaginary, while any instantiation of a chair is very real. Example of a simple decision tree. Notice how the same feature has different implications depending on the previous feature(s) identified.
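A tiny decision tree for "is this a chair?" makes the point concrete: the meaning of a feature depends on which features were checked before it. The features, thresholds, and helper name below are all invented for illustration, not a real classifier.

```python
# Hypothetical chair decision tree. Note how "has_backrest" is consulted
# down different branches with different preceding context.
def is_chair(obj):
    if not obj.get("has_seat"):
        return False
    if obj.get("legs", 0) >= 3:
        # With 3+ legs, the backrest is what separates chairs from stools.
        return obj.get("has_backrest", False)
    # Fewer than 3 legs: could still be a pedestal chair.
    return obj.get("has_pedestal", False) and obj.get("has_backrest", False)

print(is_chair({"has_seat": True, "legs": 4, "has_backrest": True}))   # True
print(is_chair({"has_seat": True, "legs": 4, "has_backrest": False}))  # False (a stool)
print(is_chair({"has_seat": False}))                                   # False
```

No flat feature checklist reproduces this behavior, because each test only makes sense conditional on the path taken to reach it; that is the conditional structure the post describes.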
  13. What proof is there of the conscience?

    Are you saying a moral system is encoded into your DNA? Are you saying a moral system isn't encoded into your DNA?
  14. Why IQ?

    It's very hard to find such relations. Maybe you can look for studies about certain activities being too simple or too complicated for a person and how it affects productivity and/or happiness. Also you could frame your question differently by asking whether high IQ diversity is beneficial to a society, since a higher average IQ generally means a society can do more stuff more quickly with the same amount of people.
  15. Because an anarcho-capitalist society doesn't rule out organizations capable of violence, but does rule out the organization currently holding the monopoly on force?
