Artificial Intelligence

Did you try claude.ai already?

1 Like

Do you know why AIs react so fast?
They repeat what you asked, so they have enough time to look for the answer in the background!

Like humans do. Trick No. 7 :smiley:

pi.ai is pretty good. The fact that it doesn't force you to have an account makes it way more attractive than the others.

Excellent job :smiling_face_with_three_hearts:

The female voice of pi.ai is also very realistic and sounds very cheerful, especially when she is a little irritated. :crazy_face: :smiling_face_with_three_hearts:

1 Like

Text 2 Video is now on the next pro level!

1 Like

I was recently thinking about how to explain a neural network in simple terms and came up with an interesting discovery.

First, we reduce the complexity of neural networks to a minimum. The simplest formula to simulate "intelligence" is y = a * x

Assume you have x = 3 and want a such that y = 2. This is easy to calculate directly: a = 2 / 3 = 0.666… But we want to approximate it instead, i.e. train the small "network", which here consists of only one "neuron". To do this, we first compute the result with a random a (say a = 0.2) and get y = 0.6, which is far from the target (2), namely by 1.4

We use 10% as the learning coefficient, i.e. we now add 10% of the error (0.14) to a, giving a new a = 0.34 and a new y = 1.02

We repeat this again and again and get a more and more accurate approximation to the target.
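The training loop described above can be sketched in a few lines of JavaScript. The starting value a = 0.2 is an assumption implied by the post's numbers (0.2 · 3 = 0.6):

```javascript
// One "neuron" y = a * x, trained toward a target output.
const x = 3;        // input
const target = 2;   // desired output y
const lr = 0.1;     // learning coefficient (10%)

let a = 0.2;        // assumed start value; gives y = 0.6 as in the post
for (let step = 0; step < 50; step++) {
  const y = a * x;           // forward pass
  const error = target - y;  // distance to the target (1.4 on step one)
  a += lr * error;           // add 10% of the error to a (0.14 on step one)
}
console.log(a * x); // converges toward the target 2
```

After the first iteration this reproduces the post's numbers (a = 0.34, y = 1.02); each further step shrinks the error by a constant factor of 1 - lr * x = 0.7.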

The next step would be to extend the formula with further factors: y = a1*x1 + a2*x2 + … (a neuron with several inputs and weights)

This would be a series that reminded me of the Taylor series. The difference from neural networks is that in the Taylor series the powers of x are summed up! Taylor series – Wikipedia

And that would be my new approach for a new type of neural network. Each strand from left to right can be seen as a chain of Taylor-series terms, and the approximation should converge much faster. If this approach works, it would mean an enormous acceleration of AIs and thus enormous energy savings.
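As a toy version of this idea, one can learn the coefficients of a small Taylor-style polynomial y = a0 + a1*x + a2*x^2 with the same error-times-learning-rate update used for the single neuron. The target function f, the sample points, and all constants here are my own arbitrary choices for illustration, not a claim about real speed-ups:

```javascript
// Learn the coefficients of y = a0 + a1*x + a2*x^2 from sample points
// of an "unknown" target function (here chosen as 1 + 2x + 3x^2).
const f = (x) => 1 + 2 * x + 3 * x * x;
const coeffs = [0, 0, 0];              // a0, a1, a2 all start at zero
const lr = 0.01;                       // learning coefficient
const samples = [-1, -0.5, 0, 0.5, 1]; // training inputs

const predict = (x) => coeffs.reduce((sum, a, k) => sum + a * x ** k, 0);

for (let epoch = 0; epoch < 5000; epoch++) {
  for (const x of samples) {
    const error = f(x) - predict(x);
    // each power of x gets its own weight update (LMS-style rule)
    for (let k = 0; k < coeffs.length; k++) coeffs[k] += lr * error * x ** k;
  }
}
console.log(coeffs); // approaches [1, 2, 3]
```

Because the target really is a quadratic, the learned coefficients converge to the exact values; for a general function the polynomial would only approximate it near the sampled region, which is the usual limitation of Taylor-style fits.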

cc alias coding crusader, 28.07.24 germany

1 Like

Honestly, I'm not qualified to tell if this is just plain bullshit or a pseudo-revolution… My only question being: "If you really think this can work (and can potentially revolutionize AI), then why share it with us and in the open?" Don't you want a new Porsche :oncoming_automobile:, yacht :speedboat: or private jet :airplane:? :joy: :rofl: On the other hand, don't worry about me… I won't steal it. I have no faen clue what you're talking about :grin: :dizzy_face: :joy:

Quote from the video, min. 4:11: "…und so weiter; und so fort" ("and so on; and so forth")… I think this is a good summary of the video and of where we stand today. We are nurturing the AI with junk food and mass media, and in the end we get… well, guess what! Seriously, if you start looking into the details (not to speak of the context, style, art…), this isn't much more than a junior's work. I'm sorry, but I cannot abide this. Not just now. Not like this. And the fact that global brands even dare thinking of using it (using it alone) to create spots (although they actually didn't air it), because it can be made by a single person in 24h (mostly unskilled and using just a text prompt), is… an offense to my kind, and an offense to humankind and all creators and engineers. Seriously, did anyone even look at the details? What's the scope? The must-haves, the nice-to-haves? Where is the emotion? If this is the formatted crap you are willing to deliver to the next generation, all I can say is that I'm happy I'm already this old… so I won't need to cope with this shit for all too long :face_with_hand_over_mouth: :grimacing:

Genius and madness go hand in hand. What if Iā€™m right? :slight_smile:

I thought you knew me better than that. I don't believe in material possessions. Everything you have has you.

Neural networks are designed to approximate functions. The Taylor series does this very efficiently. From a purely logical point of view, my assumption is therefore most likely correct.

What choice does a wise old man have? Egoism, materialism, idealism, altruism… :slight_smile: In a way, we are all one. If I help others, I help myself. So altruism turns back into egoism. :slight_smile:

1 Like

It was essentially a joke, of course :laughing:

Well, I do. In a way. I love my good food and drinks, being able to see my friends and family whenever I want. I love my winter retreat in Australia. I love to have a good living, choose my friends and clients. Have a nice place to live :house_with_garden:. It does require a bit of money, but I have no need for sports cars, boats or private jets. I think I'm lucky enough :four_leaf_clover: to be able to live the life I want. Hopefully, it will continue the same.

Yes, that might be true for a single person (each of us). But we are all caught in a system, aren't we? Unless you want to buy a private island and cut yourself off from the rest of the world. I'm afraid that what rules our society is rather things like 'ideology', 'capitalism', 'risk management' or 'venture politics'… stuff like that. Some may sound the same, but when translated to a global level, they're nothing alike anymore.

ā€¦ just a small correction in your mindset. :slight_smile:

1 Like

@mawa ā€œYou must understand the rules in order to break them better.ā€ (Gandhi) :slight_smile:

1 Like

Thanks, for I knew this before (at least, I know this :sweat_smile:)… And it's absolutely true. Funny though that it comes from Gandhi, for he forgot to mention the second part of it (or history forgot about it :thinking:): He also claimed that having a weapon is very different from using it. He also was the one pushing for the development of nukes in India (while advocating for peace!). How do I translate that: "If you want to have only but a little chance to change the system, you need to be part of it" :face_with_monocle: :face_with_hand_over_mouth:

Edit: The video and explanations are really good. There's been quite a bit of effort put into this comprehensive video. I find it to be a really valuable reference (so thanks for sharing :hugs: :smiley:)

I wanted to check the source I am referring to on Google, and the Bard AI interferes. Hmm… I'm starting to hate these smart bastards! :smiley:

Ok, it was


(link)

1 Like

Yeah, whatever. And then, who cares? Is it really important to know just which individual has been recorded in history? I'm 99% sure that many people have the same kind of thoughts in a certain time and context. I believe the message is what counts and what should remain :innocent:

1 Like

The world's fastest inference.
20x faster than GPUs, 1/5 the cost.

I sometimes wish that my ideas didn't meet with robust opposition that tends to nip them in the bud instead of giving them the opportunity to be scrutinised for suitability.

Me: Is it possible to use the Taylor series to create a neural network?

Claude.ai
That's an interesting and creative question! While Taylor series and neural networks are both important mathematical concepts, they're not typically used together in the way you're suggesting. However, let's explore this idea and see how these concepts might relate.

Taylor series are used to approximate functions around a specific point, while neural networks are used for complex pattern recognition and function approximation over a wide range of inputs. Here are some key points to consider:

  1. Function approximation: Both Taylor series and neural networks can approximate functions, but in very different ways.
  2. Locality vs. globality: Taylor series provide local approximations around a specific point, while neural networks typically learn global approximations over their entire input space.
  3. Complexity: Neural networks can approximate highly complex, non-linear functions that would require an impractically large number of terms in a Taylor series.
  4. Learning: Neural networks learn from data, adjusting their parameters to minimize error. Taylor series are derived analytically from a function's derivatives at a point.
  5. Activation functions: Some research has explored using Taylor series to approximate activation functions in neural networks, but this is different from using Taylor series as the network structure itself.

While it's not common to use Taylor series directly to create a neural network, there are some interesting connections between the two concepts:

  1. Universal approximation theorem: This theorem states that a neural network with a single hidden layer can approximate any continuous function on a compact subset of R^n. This is somewhat analogous to how Taylor series can approximate functions, but on a more global scale.
  2. Polynomial networks: Some researchers have explored neural networks with polynomial activation functions, which bear some similarity to Taylor series expansions.
  3. Taylor-inspired architectures: While not directly using Taylor series, some network architectures are inspired by similar principles of function approximation.

In conclusion, while Taylor series aren't typically used to create neural networks, both concepts are powerful tools for function approximation. The idea of combining them is creative and might inspire interesting research directions.

Would you like me to elaborate on any of these points or explore any specific aspect of this topic further?

Me: Write me example code in JS

…
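The reply itself was cut off above. Purely as an illustration (my own sketch, not Claude's actual answer), a minimal JS example of a Taylor-series approximation could look like this, approximating e^x around 0:

```javascript
// Approximate Math.exp(x) with the first n terms of its Taylor series
// around 0: e^x ≈ sum of x^k / k! for k = 0 .. n-1.
function taylorExp(x, n = 10) {
  let term = 1; // x^0 / 0!
  let sum = 1;
  for (let k = 1; k < n; k++) {
    term *= x / k; // derive each term cheaply from the previous one
    sum += term;
  }
  return sum;
}
console.log(taylorExp(1)); // ≈ 2.71828 (Math.E)
```

As Claude's point about locality suggests, the approximation is excellent near 0 but degrades as x moves away from the expansion point unless more terms are added.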

2 Likes

Tips to make better use of your limited quota (applies to all AIs):

In order to get the most value out of Claude Pro:

  • Start a new conversation via the "AI" icon in the top left corner when you want to discuss a new topic. Claude needs to re-read the entire conversation every time you send a message, so starting a new conversation will limit the amount of work Claude needs to do to continue the conversation. Bonus: Claude will also respond faster to shorter conversations.

  • Ask multiple questions at once, especially if you are asking Claude questions about a long document. Because Claude needs to re-read the conversation each time you send a new message, asking questions in individual messages would use up your limit faster than a single message with multiple questions in it.

  • Don't re-upload files. Claude sees the entire context from your conversation, so you don't need to upload the same file multiple times (unless you start a new conversation).

Source: About Claude Pro usage | Anthropic Help Center

1 Like

And what happened next? Are you now holding the base of the next-gen AI :money_mouth_face:? Or did you run out of credits :joy: