5 ways to stay smarter than gen AI, explained by computer science professors

There is an old saying in the journalism business: If your mother tells you she loves you, check it out. The point is that you have to be skeptical, even of sources you trust. But what if, instead of your mother, it’s a generative AI model like OpenAI’s ChatGPT telling you something? Should you trust the computer?

The takeaway from a talk given by a pair of Carnegie Mellon University computer science professors at South by Southwest this week? No. Check it out.

This week’s conference in Austin, Texas, has been packed with talk of artificial intelligence. Experts discussed the future and the big picture, with conversations on trust, changing jobs and more. Sherry Wu and Maarten Sap, assistant professors in Carnegie Mellon University’s School of Computer Science, took a close look at how AI chatbots, that is, large language models, actually behave.

“They’re just not perfect and don’t always do what you want them to,” Sap said.

Here are five pieces of advice for staying smarter than AI.

Be clear about what you want

Anyone whose joke has fallen flat on a social media site like Twitter or Bluesky will tell you how hard it is to convey sarcasm in text. And the posters on those sites (at least the human ones) know the social cues that signal when you’re not being literal. An LLM often doesn’t.

In one test, LLMs took nonliteral statements at face value more than half the time, Sap said, and they struggled with social reasoning.

The solution, Wu said, is to be more specific and structured with your prompts. Make sure the model knows exactly what you’re asking it to produce. Focus on what you want, and don’t assume the LLM will extrapolate.
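Wu’s advice about structure can be made concrete. Here is a minimal sketch, tied to no particular chatbot API: a helper (`build_prompt`, a name invented for this example) that forces you to spell out the task, audience, output format and constraints instead of leaving the model to guess.

```python
# A minimal sketch of structured prompting: spell out the task, the
# audience, the format, and the constraints instead of a vague one-liner.
# The field names and helper below are illustrative, not from any library.

def build_prompt(task, audience, output_format, constraints):
    """Assemble a structured prompt so the model isn't left to guess."""
    lines = [
        f"Task: {task}",
        f"Audience: {audience}",
        f"Output format: {output_format}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A vague prompt leaves everything to the model's imagination...
vague = "Write something about our product launch."

# ...while a structured one pins down exactly what you want back.
specific = build_prompt(
    task="Write a product-launch announcement for our note-taking app",
    audience="existing customers on our mailing list",
    output_format="three short paragraphs, under 120 words total",
    constraints=["no exclamation marks", "mention the May 1 launch date"],
)
print(specific)
```

Whatever prompt you build this way would then be sent to the chatbot of your choice; the point is that every requirement is stated, not assumed.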

Bots are confident, but not always right

Maybe the biggest issue with generative AI tools is that they hallucinate, meaning they make things up. Sap said hallucination happens at significant rates, and even more often in specialized areas such as law and medicine.

The problem goes beyond just getting things wrong. Sap said chatbots can confidently assert a response even when it’s completely wrong.

“This leaves humans vulnerable to trusting that expression of certainty even when the model is wrong,” he said.

The solution here is simple: check the LLM’s answers. You can probe the model itself, by asking follow-up questions or by posing the same question in different ways. You may get different answers. “Sometimes you will see that the model doesn’t really know what it’s saying,” she said.
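One way to act on that advice programmatically is a rough self-consistency check: pose several paraphrases of the same question and see whether the answers agree. The sketch below uses a canned `ask_model` stub in place of a real chatbot call; the function name and its behavior are assumptions for illustration only.

```python
# A sketch of the "ask it more than once" check: pose paraphrases of the
# same question and treat the answer as shaky if the responses disagree.
# ask_model is a stand-in for whatever chatbot API you actually use.

from collections import Counter

def ask_model(question):
    # Placeholder: route to your LLM of choice here.
    canned = {
        "When did the Eiffel Tower open?": "1889",
        "What year was the Eiffel Tower opened?": "1889",
        "In what year did the Eiffel Tower open to the public?": "1889",
    }
    return canned.get(question, "unknown")

def consistency_check(paraphrases):
    """Return the most common answer and how often the model gave it."""
    answers = [ask_model(q).strip().lower() for q in paraphrases]
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    return top, agreement  # low agreement = treat the answer with suspicion

answer, agreement = consistency_check([
    "When did the Eiffel Tower open?",
    "What year was the Eiffel Tower opened?",
    "In what year did the Eiffel Tower open to the public?",
])
print(answer, agreement)
```

With a real model the answers can genuinely diverge, and a low agreement score is exactly the signal the professors describe: the model doesn’t really know.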

The most important thing is to verify answers against external sources. That also means you should be careful about asking questions you don’t know the answer to. Wu said generative AI answers are most useful when they’re on a topic you’re familiar with, so you can tell what’s real and what isn’t.

“It’s about knowing when to trust the model and when not to,” she said. “Don’t trust a model just because it tells you it’s very confident.”

AI can’t keep a secret

The privacy concerns with LLMs are real. You’re entering information, possibly information you wouldn’t want to see on the internet, into a machine that may repeat it to anyone who can tease it out. Sap described a demonstration with OpenAI’s ChatGPT in which the model, asked to help organize a surprise party, gave the secret away.

“LLMs are not good at reasoning about who should know what and what should stay private,” he said.

Do not share sensitive or personal data with an LLM, Sap said.

“If you give the model any information about yourself, always double-check that it’s nothing you wouldn’t want to get out,” he said.
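If you pipe text to a model programmatically, part of that double-check can be automated. Below is a rough sketch that redacts obvious personal details before the text ever leaves your machine; the regex patterns are illustrative only, and real PII detection needs far more care than three patterns.

```python
# A rough sketch of "double-check before you share": scrub obvious
# personal details from text before it reaches an LLM.
# These patterns are illustrative; real PII detection is much harder.

import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

msg = "Reach me at jane.doe@example.com or 412-555-0199 about the party."
print(redact(msg))
# → "Reach me at [EMAIL] or [PHONE] about the party."
```

A scrubber like this catches only the mechanical cases; the professors’ broader point, that the model can’t be trusted to keep a secret, still applies to anything that slips through.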

Remember, you are talking to a machine

Chatbots are partly so captivating because of how well they mimic human speech. But it’s all mimicry; it’s not really human, Sap said. Models say things like “I’m curious” or “I’m happy to help” because they are trained on language that includes those words, not because they have feelings. “The way we use language, these words all imply an internal, sentient state,” Sap said. “It suggests that the language model thinks, that it has an internal world.”

Thinking of AI models as human can be dangerous, because it may lead to misplaced trust. Humans can grow attached to them and treat them as if they were people, Sap said.

“Humans are much more likely to over-attribute humanlike qualities or consciousness to AI systems,” he said.

Using an LLM doesn’t always make sense

Despite claims that today’s models are capable of advanced reasoning and research, they mostly do what they have been trained to do. Benchmarks may suggest a model performs at the level of a human with a Ph.D., but those results only hold on the benchmark; there’s no guarantee the model will work on the task you actually want to use it for.

“There is this illusion of robustness around these AI capabilities that is leading people to make unsound decisions in their businesses,” he said.

Before you adopt, or rule out, a generative AI model for a use case, weigh what it does well against its potential shortfalls and the damage a mistake could cause.
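One practical way to do that weighing is to score the model on a handful of labeled examples from your own task, rather than trusting benchmark numbers. The sketch below uses a deliberately naive stand-in `model` function, an assumption for illustration; a real evaluation would call whatever system you’re actually considering.

```python
# A sketch of "check the model on your use case, not a benchmark":
# score it on a few examples from your own task before relying on it.
# `model` is a naive stand-in for a real LLM call.

def model(text):
    # Placeholder "sentiment classifier" standing in for a real system.
    return "positive" if "good" in text.lower() else "negative"

def accuracy_on_my_task(examples):
    """examples: list of (input_text, expected_label) from your own task."""
    correct = sum(1 for text, label in examples if model(text) == label)
    return correct / len(examples)

my_examples = [
    ("The service was good", "positive"),
    ("Terrible, never again", "negative"),
    ("Not good at all", "negative"),  # the naive stub gets this one wrong
]
print(accuracy_on_my_task(my_examples))  # → 0.666...
```

Even a tiny hand-built test set like this surfaces the gap between impressive-sounding benchmark scores and performance on the inputs you actually care about.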

