Maybe in the future :)
Thanks! Unfortunately, it wasn’t recorded. I’ll keep that in mind about Bazel, thank you.
Here you can find the slides: speakerdeck.com/thomasvitale... and the repository: github.com/ThomasVitale...
Java is a modern and powerful technology stack, making it an excellent choice for developing cloud-native applications. Spring, built on top of Java, offers a broad ecosystem of frameworks and librari...
5. Overenthusiastic LLM Use is put on Hold. LLMs might not be the right solution to some problems. For example, some sentiment analysis and classification problems "can be solved more cheaply and easily using traditional natural language processing (NLP)". www.thoughtworks.com/radar/techni...
In the rush to leverage the latest in AI, many organizations are quickly adopting large language models (LLMs) for a variety of applications, from content [...]
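To make that point concrete, here's a minimal sketch of the kind of "traditional NLP" the radar alludes to: a tiny lexicon-based sentiment scorer. All names and word lists are illustrative assumptions, not from the radar; a real system would use a proper NLP library, but even this toy shows why an LLM can be overkill for simple classification.

```java
import java.util.Set;

// Toy lexicon-based sentiment scorer: cheap, deterministic, and dependency-free,
// in contrast to calling an LLM for every classification (illustrative only).
public class SentimentSketch {

    static final Set<String> POSITIVE = Set.of("great", "good", "excellent", "love");
    static final Set<String> NEGATIVE = Set.of("bad", "terrible", "awful", "hate");

    // Returns a score > 0 for positive text, < 0 for negative, 0 for neutral.
    static int score(String text) {
        int score = 0;
        for (String word : text.toLowerCase().split("\\W+")) {
            if (POSITIVE.contains(word)) score++;
            if (NEGATIVE.contains(word)) score--;
        }
        return score;
    }

    public static void main(String[] args) {
        System.out.println(score("I love this excellent tool")); // 2
        System.out.println(score("terrible and awful"));         // -2
    }
}
```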
4. Ollama is set to Assess. It's an open-source tool for running and managing LLMs in local environments, useful for development and testing (check out the Testcontainers Ollama module), and for running inference services on-prem. www.thoughtworks.com/radar/tools/...
Ollama is an open-source tool for running and managing large language models (LLMs) on your local machine. Previously, we talked about the benefits of self-hosted [...]
3. Rush to Fine-Tune LLMs is put on Hold: in the vast majority of cases where an LLM needs to be made aware of specific knowledge, "using a form of retrieval-augmented generation (RAG) offers a better solution and a better cost-benefit ratio". www.thoughtworks.com/radar/techni...
As organizations are looking for ways to make large language models (LLMs) work in the context of their product, domain or organizational knowledge, we're seeing [...]
2. Retrieval Augmented Generation (RAG) is set to Adopt as the preferred way "to improve the quality of responses generated by a large language model (LLM)". www.thoughtworks.com/radar/techni...
Retrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of responses generated by a large language model (LLM). We’ve successfully [...]
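For anyone new to the pattern, here's a minimal sketch of the retrieval-then-augment flow at the heart of RAG. The naive keyword scoring and all names are my own illustrative assumptions; a production setup would use embeddings and a vector store (e.g. via Spring AI) instead.

```java
import java.util.Comparator;
import java.util.List;

// Minimal sketch of RAG: retrieve the most relevant document, then splice it
// into the prompt so the LLM answers from that context rather than from its
// training data alone. Keyword matching stands in for embedding similarity.
public class RagSketch {

    // Score a document by counting how many query words it contains (naive).
    static long score(String document, String query) {
        return List.of(query.toLowerCase().split("\\W+")).stream()
                .filter(word -> document.toLowerCase().contains(word))
                .count();
    }

    // Build an augmented prompt from the best-matching document.
    static String augmentedPrompt(List<String> documents, String query) {
        String context = documents.stream()
                .max(Comparator.comparingLong(doc -> score(doc, query)))
                .orElse("");
        return "Answer using only this context:\n" + context
                + "\n\nQuestion: " + query;
    }

    public static void main(String[] args) {
        List<String> docs = List.of(
                "Ollama runs large language models locally.",
                "Spring is a framework for building Java applications.");
        System.out.println(augmentedPrompt(docs, "run models locally"));
    }
}
```

The final prompt would then be sent to the LLM; the retrieval step is what makes the response grounded in your own documents.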
And if you want to know more about the book, check out this episode of the GOTO Book Club, where I had a chance to interview @salaboy.com: www.youtube.com/watch?v=Wp8h...
This interview was recorded for the GOTO Book Club. #GOTOcon #GOTObookclub http://gotopia.tech/bookclub Read the full transcription of the interview here: https...