Google Bard is a stupid name so I will forever call it Google Bart. Much more appropriate.
Eric Swildens’ Post
-
Finally got into udio.com; they've been overloaded for a while. I've used Suno a lot to create music. Suno's been down a bit recently too, as they had to upgrade their infrastructure to support demand.

Udio is pretty good. Try it out if you get a chance. It's free right now and it just went live this week. It will probably crash due to demand again in the near term, but maybe you'll get lucky. It tends to make up its own lyrics randomly even when I feed it my own to sing, so there are some improvements to make, but this is just a V1.

Honestly, after playing with it for a few hours, Suno does a better job matching what I want from a song. Here's an example of something I made with Suno. I wrote the lyrics for this, and it generated just about exactly what I wanted. https://lnkd.in/gd4rTTUt

When I tried to do the same with Udio, it just didn't give me what I was looking for, and I tended to get "moderation error" messages when trying to make something better. But when Udio hits it right, it hits pretty well. Here are links to some songs I liked from Udio, all fully generated by Udio and not by me, and a link to Udio itself. https://lnkd.in/gsUDcDwr https://lnkd.in/gHCfM_Tm https://lnkd.in/gaCs5dhF
-
The Transparent Self

This was written without any AI; kindly forgive any grammar mistakes or typos.

When I was growing up, there was a big focus on books and only a limited number of (live) TV stations, since we did not have the Internet. Despite the constraints of these more restricted and slower means of communication, during the 60s through 80s there were subjects that would capture the nation's collective zeitgeist. In the 70s, for instance, there was a period where books and television focused on UFOs. Chariots of the Gods (1970) and Leonard Nimoy's "In Search Of..." ruled the day for a time.

Back in the 60s, there was a real focus on psychology for a number of years, both children's and adult, featuring figures such as Sigmund Freud, B.F. Skinner, Carl Jung, Timothy Leary and others. This focus on psychology came along with a lot of introspection, in terms of people looking at their own behavior. In tune with that, there are two books from that era that resonated with me and that I have revisited a number of times over the years.

My favorite is: The Transparent Self by Sidney M. Jourard, PhD (1964). It is a somewhat easy read but does tend to harp on the same points as you read on. If interested, you could probably read the first few chapters and then skim the rest over time.

The second book, which was a best seller in 1964 and which I have also found myself referencing in life, is: Games People Play: The Psychology of Human Relationships by Eric Berne, M.D. (1964).

Notice my lack of Amazon affiliate links 🙄

I recommend both of those books if you are looking for something to read other than something on the Internet, maybe in a nice quiet place. Of course, I'm sure they are available on the Internet as well.
-
Google launched Gemini last Thursday. "Gemini" is a decent name, unlike "Bard", which was terrible.

Gemini deserves more praise than it has received. It is a remarkable advancement that uses a Mixture of Experts (MoE) architecture. MoE seems like the best approach and mimics the human brain's structure more closely. The concept of MoE dates back to 1991, but these are novel applications of it.

Gemini's announcement should have made more headlines, but it was eclipsed by the text-to-video model that OpenAI released, which was also incredibly impressive. OpenAI should also get more praise and recognition for that work.

It is hard to keep up with the pace of innovation. AI is evolving so quickly thanks to so many brilliant engineers. The methods I would have used to process data (vector database/embeddings) just last week have changed due to the releases that just came out. I expect my work with robotics and generative models to be surpassed by some other group's breakthrough in the next few months, or even sooner.

This is such an exciting time for engineering and science. It must be similar to how it would have felt in the 1800s when electromagnetism was discovered.

Also, if you haven't checked out suno.ai for music, absolutely take a few minutes to check it out.
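For anyone curious what "Mixture of Experts" actually means in code, here is a minimal toy sketch of the core idea: a gating network scores a set of expert networks, only the top few experts run, and their outputs are blended by the gate's scores. This is my own illustration with made-up random linear "experts", not Gemini's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def moe_forward(x, experts, gate_w, top_k=2):
    """Score every expert with a gating network, keep the top-k,
    and blend their outputs weighted by the (renormalized) scores."""
    scores = softmax(gate_w @ x)               # one gate score per expert
    top = np.argsort(scores)[-top_k:]          # indices of the k best experts
    weights = scores[top] / scores[top].sum()  # renormalize over the chosen k
    # Only the chosen experts run -- that sparsity is the point of MoE
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
dim, n_experts = 4, 8
# Each "expert" here is just a random linear map, for illustration
mats = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
experts = [lambda v, m=m: m @ v for m in mats]
gate_w = rng.standard_normal((n_experts, dim))

y = moe_forward(rng.standard_normal(dim), experts, gate_w)
print(y.shape)
```

The appeal is that only `top_k` of the experts do any work per input, so a model can have enormous total capacity while each token pays for only a small slice of it.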
-
Why do GPT AI bots give different, conflicting answers to the same question, and why do they hallucinate?

The current GPT AI bot engines create responses by predicting, guessing, and outputting the next word or "word part" to write. For example, if we ask "Tell me about Eric Swildens," the possibilities for the next word might be "Eric" at 95% and "He" at 5%. The bot then rolls dice to pick one and, let's say, outputs "Eric." Then it predicts the next word, and let's say it guesses that the options are "Swildens" at 99% and "Johnson" at 1%. Why "Johnson"? Because the corpus of information it loaded in has Johnson affiliated with Eric Swildens for some reason. It rolls the dice and picks "Swildens." It then does the same for the next two words, and we have "Eric Swildens is a."

Now, let's look at the next word. I'm in technology, but I have a relative who has a fashion brand in Paris, France. They are not named Eric, but the corpus of information loaded by the GPT model doesn't have too much information about me. If it were someone else, it would have a lot more data about them, so it would have a better guess at the next word. For me, it might have possibilities for the next word being "technology" at 90% and "French" at 10%. Let's say it picks "French." It just messed up, because I'm not French. It goes on to predict the next word, but now it has a French Eric Swildens, not me. So, it goes with that.

There is a Swildens who is a fashion designer in France, so given "French," it has next-word options of "fashion" at 80% and "technology" at 20%. It picks "fashion," and it goes from there. But now it has an Eric who is in French fashion, so it is off the rails and keeps going. That's why it hallucinates answers. It was all because it chose "French" as the 5th word and went from there. If it had chosen "technology" as the 5th word, it would probably have been OK.
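The walk-through above can be written as a toy program. The prefixes and probabilities are the made-up numbers from the example, not from any real model; the point is the weighted "dice roll" at each step, and how one unlucky 10% roll at word five sends every later word down the wrong path.

```python
import random

# Toy next-word "model": for a given prefix, the candidate next
# words and their probabilities (the made-up numbers from the post)
model = {
    (): [("Eric", 0.95), ("He", 0.05)],
    ("Eric",): [("Swildens", 0.99), ("Johnson", 0.01)],
    ("Eric", "Swildens"): [("is", 1.0)],
    ("Eric", "Swildens", "is"): [("a", 1.0)],
    ("Eric", "Swildens", "is", "a"):
        [("technology", 0.9), ("French", 0.1)],
    ("Eric", "Swildens", "is", "a", "French"):
        [("fashion", 0.8), ("technology", 0.2)],
    ("Eric", "Swildens", "is", "a", "technology"):
        [("entrepreneur", 1.0)],
}

def generate(seed=None):
    """Repeatedly roll weighted dice to pick the next word."""
    rng = random.Random(seed)
    words = []
    while tuple(words) in model:
        tokens, probs = zip(*model[tuple(words)])
        words.append(rng.choices(tokens, weights=probs)[0])  # the dice roll
    return " ".join(words)

# Run it a few times: most runs stay on track, but an unlucky
# roll of "French" derails everything that follows it.
for seed in range(3):
    print(generate(seed))
```

A real model works on "word parts" (tokens) and computes the probabilities with a neural network instead of a lookup table, but the sampling loop is the same shape.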
It gives different answers because it rolls dice to pick the next word, and it doesn't reuse the same random seed from one run to the next, so the dice rolls come out differently each time. There are technical reasons why it does that, and you can watch a great YouTube video by Stephen Wolfram on how GPT works if you want other details. Wolfram's videos, in general, are excellent.

What does this all mean? It means that when the GPT model doesn't have much training information about a subject, it has a good chance of getting things wrong, and once it starts out wrong, it goes with it. Additionally, the models do inference from their trained data, so if that data is bad, the output will be bad.

Does this tell us anything about ourselves and how we may think? We are predicting the next word when we talk and our next move when we act. We aren't the same as these neural models, but we do rhyme with them. It is something to consider.
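The dice-roll point is easy to demonstrate: drawing from the same probabilities with no fixed seed can give different words on different runs, while fixing the seed makes the choice repeat exactly. This is a generic illustration of weighted sampling, not how any particular GPT service is implemented.

```python
import random

# Candidate 5th words from the example, with the made-up probabilities
options = [("technology", 0.9), ("French", 0.1)]

def sample_word(options, rng):
    """One weighted 'dice roll' over the candidate next words."""
    words, probs = zip(*options)
    return rng.choices(words, weights=probs)[0]

# No fixed seed: across many runs, both words eventually show up
picks = {sample_word(options, random.Random()) for _ in range(500)}
print(picks)

# Fixed seed: the exact same dice roll repeats every time
a = [sample_word(options, random.Random(42)) for _ in range(3)]
print(a)
```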
-
Another interesting thing about ChatGPT is how, if you correct it, it can agree it was wrong. It is still wrong about what Max Planck thought about free will, though.