Did you try claude.ai already?
Do you know why the AIs react faster?
They repeat what you asked. So they have enough time to look for the answer in the background!
Like humans do. Trick No. 7
pi.ai is pretty good. The fact that they don't force you to have an account makes it way, way more attractive than the others.
Excellent job
The female voice of pi.ai is also very realistic and sounds very cheerful, especially when she is a little irritated.
I was recently thinking about how to explain a neural network in simple terms and came up with an interesting discovery.
First, we reduce the complexity of neural networks to a minimum. The simplest formula to simulate "intelligence" is y = a * x
Assume you have x = 3 and want an a such that y = 2. This is easy to calculate: a = 2 / 3 = 0.666… But we want to approximate it, i.e. train the small "network", which for now consists of only one "neuron". To do this, we first compute the result with a random a (here 0.2) and get 0.6, which is far away from the target (2), namely by 1.4
We use 10% as the learning coefficient, i.e. we add 10% of the error (0.14) to a, giving a new a = 0.34 and a new y = 1.02
We repeat this again and again and get a more and more accurate approximation to the target.
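This loop can be sketched in a few lines of JavaScript. The starting value a = 0.2 is an assumption (it is what yields the 0.6 in the example above); everything else follows the numbers in the text:

```javascript
// Train a single "neuron" y = a * x toward a target output.
const x = 3;        // fixed input
const target = 2;   // desired output
const lr = 0.1;     // learning coefficient (10%)

let a = 0.2;        // random initial guess: 0.2 * 3 = 0.6, as in the example

for (let step = 0; step < 50; step++) {
  const y = a * x;           // forward pass
  const error = target - y;  // distance from the target (1.4 on the first step)
  a += lr * error;           // nudge a by 10% of the error (+0.14 on the first step)
}

console.log(a); // approaches 2 / 3 ≈ 0.6667
```

Each iteration shrinks the remaining error by a constant factor, which is exactly the "more and more accurate approximation" described above.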
The next step would be to extend the formula with further factors: y = a1x + a2x + …
This would be a series that reminded me of the Taylor series. The difference to neural networks is that there the powers of x are summed up! Taylor series – Wikipedia
And that would be my new approach for a new type of neural network. Each strand from left to right can be seen as a chain of the Taylor series, and the approximations should be much faster. If this approach works, it would mean an enormous acceleration of AIs and thus enormous savings in energy.
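The proposed extension is untested, but the idea can be illustrated with the same error-feedback update applied to a few polynomial coefficients. Everything here is an assumption for illustration only: the target function sin(x), the sample points, the learning rate, and the use of three terms a1·x + a2·x² + a3·x³:

```javascript
// Hypothetical sketch: a short "Taylor chain" y = a1*x + a2*x^2 + a3*x^3,
// trained with the same error-feedback idea as the single neuron above.
const coeffs = [0, 0, 0]; // a1, a2, a3, all starting at zero
const lr = 0.01;          // assumed learning rate

// Training data: a few samples of the function we want to approximate (sin).
const samples = [-1, -0.5, 0, 0.5, 1].map(x => [x, Math.sin(x)]);

const predict = x =>
  coeffs.reduce((sum, a, i) => sum + a * Math.pow(x, i + 1), 0);

for (let step = 0; step < 5000; step++) {
  for (const [x, target] of samples) {
    const error = target - predict(x);
    // Nudge each coefficient in proportion to the error and its term's input.
    for (let i = 0; i < coeffs.length; i++) {
      coeffs[i] += lr * error * Math.pow(x, i + 1);
    }
  }
}

console.log(coeffs); // a1 near 1, a3 negative: roughly the sin Taylor shape x - x^3/6
```

Whether a network built from such chains would actually train faster than a standard one is exactly the open question posed above; this sketch only shows that the coefficients can be fitted by simple error feedback.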
cc alias coding crusader, 28.07.24, Germany
Honestly, I'm not qualified to tell if this is just plain bullshit or a pseudo-revolution… My only question being: "If you really think this can work (and can potentially revolutionize AI), then why share it with us in the open?" Don't you want a new Porsche, yacht or private jet? On the other hand, don't worry about me… I won't steal it. I have no damn clue what you're talking about
Quote from the video at min. 4:11: "und so weiter; und so fort" ("and so on; and so forth")… I think this is a good summary of this video and of where we stand today. We are feeding the AI junk food and mass media, and in the end we get… well, guess what! Seriously, if you start looking into the details (not to speak of the context, style, art…), this isn't much more than a junior's work. I'm sorry, but I cannot abide this. Not just now. Not like this. And the fact that global brands even dare to think of using it (using it alone) to create spots (although they actually didn't air them), because it can be made by a single person in 24h (mostly unskilled and using just a text prompt), is… an offense to my kind, and an offense to humankind and all creators and engineers. Seriously, did anyone even look at the details? What's the scope? The must-haves, the nice-to-haves? Where is the emotion? If this is the formatted crap you are willing to deliver to the next generation, all I can say is that I'm happy I'm already this old… so I won't need to cope with this shit for all too long
Genius and madness go hand in hand. What if I'm right?
I thought you knew me better than that. I don't believe in material possessions. Everything you have, has you.
Neural networks are designed to approximate functions. The Taylor series does this very efficiently. From a purely logical point of view, my assumption is therefore most likely correct.
What choice does a wise old man have? Egoism, materialism, idealism, altruism… In a way, we are all one. If I help others, I help myself. So altruism turns back into egoism.
It was essentially a joke, of course
Well, I do. In a way. I love my good food and drinks, being able to see my friends and family whenever I want. I love my winter retreat in Australia. I love having a good living, choosing my friends and clients, and having a nice place to live. It does require a bit of money, but I have no need for sports cars, boats or private jets. I think I'm lucky enough to be able to live the life I want. Hopefully, it will continue the same.
Yes, that might be true for a single person (each of us). But we are all caught in a system, aren't we? Unless you want to buy a private island and cut yourself off from the rest of the world. I'm afraid that what rules our society is rather things like "ideology", "capitalism", "risk management" or "venture politics"… stuff like that. Some may sound the same, but when translated to a global level, they're nothing alike anymore.
… just a small correction to your mindset.
@mawa "You must understand the rules in order to break them better." (Gandhi)
Thanks, for I knew this before (at least, I know this)… And it's absolutely true. Funny though that it comes from Gandhi, for he forgot to mention the second part of it (or history forgot about it): he also claimed that having a weapon is very different from using it, and he was the one pushing for the development of nukes in India (while advocating for peace!). How I translate that: "If you want to have even a little chance to change the system, you need to be part of it."
Edit: The video and explanations are really good. There's been quite a bit of effort put into this comprehensive video. I find it to be a really valuable reference (so thanks for sharing).
I wanted to check the source I am referring to on Google, and Bard AI interferes. Hmm… I'm starting to hate these smart bastards!
Ok, it was
(link)
Yeah, whatever. And then, who cares? Is it really important to know just which individual has been recorded in history? I'm 99% sure that many people have the same kind of thoughts in a certain time and context. I believe the message is what counts and what should remain.
I sometimes wish that my ideas didn't meet with robust opposition that tends to nip them in the bud instead of giving them the opportunity to be scrutinised for suitability.
Me: Is it possible to use the Taylor series to create a neural network?
Claude.ai
That's an interesting and creative question! While Taylor series and neural networks are both important mathematical concepts, they're not typically used together in the way you're suggesting. However, let's explore this idea and see how these concepts might relate.
Taylor series are used to approximate functions around a specific point, while neural networks are used for complex pattern recognition and function approximation over a wide range of inputs. Here are some key points to consider:
- Function approximation: Both Taylor series and neural networks can approximate functions, but in very different ways.
- Locality vs. globality: Taylor series provide local approximations around a specific point, while neural networks typically learn global approximations over their entire input space.
- Complexity: Neural networks can approximate highly complex, non-linear functions that would require an impractically large number of terms in a Taylor series.
- Learning: Neural networks learn from data, adjusting their parameters to minimize error. Taylor series are derived analytically from a function's derivatives at a point.
- Activation functions: Some research has explored using Taylor series to approximate activation functions in neural networks, but this is different from using Taylor series as the network structure itself.
While it's not common to use Taylor series directly to create a neural network, there are some interesting connections between the two concepts:
- Universal approximation theorem: This theorem states that a neural network with a single hidden layer can approximate any continuous function on a compact subset of R^n. This is somewhat analogous to how Taylor series can approximate functions, but on a more global scale.
- Polynomial networks: Some researchers have explored neural networks with polynomial activation functions, which bear some similarity to Taylor series expansions.
- Taylor-inspired architectures: While not directly using Taylor series, some network architectures are inspired by similar principles of function approximation.
In conclusion, while Taylor series aren't typically used to create neural networks, both concepts are powerful tools for function approximation. The idea of combining them is creative and might inspire interesting research directions.
Would you like me to elaborate on any of these points or explore any specific aspect of this topic further?
Me: Write me an example code in js
…
Tips to make better use of the limited quota (applies to all AIs).
In order to get the most value out of Claude Pro:
- Start a new conversation via the "AI" icon in the top left corner when you want to discuss a new topic. Claude needs to re-read the entire conversation every time you send a message, so starting a new conversation will limit the amount of work Claude needs to do to continue the conversation. Bonus: Claude will also respond faster in shorter conversations.
- Ask multiple questions at once, especially if you are asking Claude questions about a long document. Because Claude needs to re-read the conversation each time you send a new message, asking questions in individual messages would use up your limit faster than a single message with multiple questions in it.
- Don't re-upload files. Claude sees the entire context from your conversation, so you don't need to upload the same file multiple times (unless you start a new conversation).
And what happened next? Are you now holding the basis of the next-gen AI? Or did you run out of credits?