LLM Key Issues and Applications

Key LLM Issues

• Misinformation – wrong or biased answers, which can result from insufficient prompt engineering.

• Intrinsic Hallucination – fabricated content that contradicts the provided source or input.

• Extrinsic Hallucination – content that cannot be verified against the provided source or input.

Hallucination can be reduced by adding training data, grounding answers in contextual references, and using Retrieval Augmented Generation (RAG).
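The RAG idea can be sketched in a few lines: retrieve relevant documents, then prepend them to the prompt so the model answers from references instead of memory. This is a toy illustration, not a real API – the `retrieve` and `build_prompt` names, the word-overlap scoring, and the corpus are all made up; production systems use vector search instead.

```python
# Minimal sketch of retrieval-augmented prompting (illustrative names only).
CORPUS = [
    "The Eiffel Tower is 330 metres tall.",
    "Python 3.12 was released in October 2023.",
    "Mount Everest is the highest mountain above sea level.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for vector search)."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from references, not memory."""
    context = "\n".join(retrieve(query, CORPUS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How tall is the Eiffel Tower?"))
```

Because the answer is grounded in retrieved text, it can also be cited back to the user – which addresses the citation problem discussed later.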

LLMs are non-deterministic: the same prompt can yield different answers on different runs.
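One source of this non-determinism is temperature sampling over the next-token distribution. The sketch below uses a made-up set of token logits (the tokens and scores are illustrative, not from any real model) to show how temperature reshapes probabilities before a random draw:

```python
# Sketch: temperature sampling makes output vary between runs.
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    """Softmax with temperature, then draw one token at random."""
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    return rng.choices(list(probs), weights=list(probs.values()))[0]

logits = {"Paris": 2.0, "London": 1.0, "Berlin": 0.5}  # made-up scores
rng = random.Random()
draws = {sample_token(logits, temperature=1.5, rng=rng) for _ in range(50)}
# At high temperature the draws vary from run to run; as temperature
# approaches 0, the distribution collapses onto the highest-logit token.
```

Setting the temperature near zero (greedy decoding) makes repeated runs agree far more often, at the cost of more repetitive output.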

To learn more about hallucination, see this survey: https://arxiv.org/pdf/2311.05232.pdf

Things GenAI Can Do

• It can summarize large amounts of information.

• Can perform better with augmented context via Retrieval Augmented Generation (RAG).

• Can be a great coding assistant.

• Can be supported by other software processes (backend APIs and services that improve performance).

• Can augment your own knowledge base and surface it to users.
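The point about supporting backend processes can be sketched with a simple caching layer in front of a model call. Here `fake_llm` is a stand-in for a real model API (not any vendor's SDK), and `cached_answer` is an illustrative name – the caching pattern is the technique being shown:

```python
# Sketch: a backend caching layer in front of an LLM call.
import functools
import time

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call: simulates latency, returns a canned reply."""
    time.sleep(0.05)  # pretend network + inference time
    return f"Answer to: {prompt}"

@functools.lru_cache(maxsize=1024)
def cached_answer(prompt: str) -> str:
    """Serve repeated identical prompts from memory instead of re-querying."""
    return fake_llm(prompt)

cached_answer("What is RAG?")  # slow: hits the (pretend) model
cached_answer("What is RAG?")  # fast: served from the cache
```

Caching only helps for exactly identical prompts, but as a side effect it also returns the same reply every time – sidestepping non-determinism for repeated queries.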

Things GenAI Can’t Do

• Can’t be certain about anything – the hallucination problem.

• Can’t figure out new things on its own (RAG helps, but it doesn’t alter the underlying model itself).

• Can’t reliably tell whether a piece of content was created by GenAI.

• Can’t properly cite its own sources of information (cited sources can themselves be hallucinated; retrieved RAG documents can serve as citable sources).

• Can’t take your job yet – it has no way to proactively learn on its own.

LLM Applications

• Writing Assistance – technical and creative writing, general editing, documentation, programming.

• Information Retrieval – search engine support, conversational recommendation, document summarization, text interpretation.

• Commercial – customer support, machine translation, workflow automation, knowledge testing, business management, medical diagnosis.

• Individual Use – productivity support, Q&A, brainstorming, education, problem solving.

Don’t forget!
GenAI is not the only AI!
