Two ideas for learning from LLMs are circulating, signaling a shift from curiosity to usable workflows [1]. On the memory and personality side, a personal AI called Roampal learns who you are and what actually works for you, running locally with Ollama and a multi-layer memory system. It’s open source (MIT) and designed to let the AI manage its own memory patterns while you keep override control [2].
Learning and memory on your terms — People are exploring AI that adapts to you, not the other way around. The Roampal setup emphasizes a memory bank, books, and a knowledge-graph approach that scores outcomes and surfaces what actually helps you, all without your data leaving your machine [2].
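To make the pattern concrete, here is a minimal sketch (hypothetical, not Roampal's actual code) of an outcome-scored memory layer that feeds context to a local Ollama model through its /api/generate endpoint; the class and method names are illustrative assumptions.

```python
# Hypothetical sketch of a "what worked for you" memory layer in front of Ollama.
# MemoryStore, score_outcome, and ask are illustrative names, not Roampal's code.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

class MemoryStore:
    """Keeps past notes keyed by topic, with a simple outcome score."""
    def __init__(self):
        self.entries = []  # each entry: {"topic": str, "note": str, "score": int}

    def add(self, topic, note):
        self.entries.append({"topic": topic, "note": note, "score": 0})

    def score_outcome(self, topic, delta):
        # User feedback nudges scores; higher-scoring notes get surfaced first.
        for entry in self.entries:
            if entry["topic"] == topic:
                entry["score"] += delta

    def recall(self, topic, k=3):
        hits = [e for e in self.entries if e["topic"] == topic]
        return sorted(hits, key=lambda e: e["score"], reverse=True)[:k]

def ask(memory: MemoryStore, topic: str, question: str, model: str = "llama3"):
    """Prepend the best-scoring notes to the prompt and query the local model."""
    context = "\n".join(e["note"] for e in memory.recall(topic))
    prompt = f"Notes that helped before:\n{context}\n\nQuestion: {question}"
    resp = requests.post(OLLAMA_URL, json={"model": model, "prompt": prompt, "stream": False})
    return resp.json()["response"]

# Example use (needs a running Ollama server):
# memory = MemoryStore(); memory.add("sleep", "No caffeine after 2pm worked well.")
# memory.score_outcome("sleep", +1)
# print(ask(memory, "sleep", "How do I fall asleep faster?"))
```

The scoring step is the point of the exercise: the model's suggestions are ranked by what actually helped, while the user keeps override control by editing or deleting entries directly.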
What to do when you don’t know what to do with an LLM — A long-form forum post offers three paths: a Creative Path (the LLM as collaborator in writing and world-building), a Technical Path (building, refactoring, and explaining systems), and a Reflective Path (using the model to map your own reasoning). The takeaway: guide the model rather than command it, and treat each prompt as a design decision [3].
Writer-focused model debates — A writer’s thread weighs models such as Qwen 70B and Mixtral 8x22B as engines for scriptwork and for pulling personal life information into the workflow through a RAG setup. The discussion underscores context-window limits and the practicalities of feeding the model personal texts to steer its style and substance [4].
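As a rough sketch of what such a RAG setup can look like (an assumption for illustration, not the poster's pipeline), the snippet below ranks personal notes by TF-IDF similarity and trims the top matches into the prompt to stay inside a context budget.

```python
# Hypothetical retrieval sketch: rank personal notes against the writing task,
# keep the top matches, and cap the total characters before prompting a model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [
    "Protagonist speaks in short, clipped sentences.",
    "Season 2 outline: the heist goes wrong in episode 4.",
    "Preferred tone: dry humor, no melodrama.",
]

def build_prompt(query, notes, k=2, max_chars=2000):
    vectorizer = TfidfVectorizer().fit(notes + [query])
    sims = cosine_similarity(vectorizer.transform([query]),
                             vectorizer.transform(notes))[0]
    ranked = [note for _, note in sorted(zip(sims, notes), reverse=True)][:k]
    context = "\n".join(ranked)[:max_chars]  # crude guard against blowing the context window
    return f"Reference notes:\n{context}\n\nTask: {query}"

print(build_prompt("Draft a scene where the protagonist confronts the fence.", notes))
```

A real setup would swap TF-IDF for embeddings and chunk longer documents, but the context-limit trade-off the thread raises shows up even in this toy version.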
Creative experiments in real time — In a project called Synthasia, a developer is training a MIDI generator to compose on the fly as a dynamic soundtrack for an LLM-powered text adventure. The work unfolds in five stages, with Stage 4 tying language prompts to music via an encoder, all tracked in public progress videos; the latest milestone uses Gemini 2.5 Pro for orchestration work and evaluation [5].
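For flavor, here is a toy version of the dynamic-soundtrack idea (a hypothetical sketch, not Synthasia's pipeline): a scene mood picks a scale and tempo, and mido writes out a short MIDI phrase; in the real project a trained generator conditioned on the language prompt would replace the rule-based note picker.

```python
# Hypothetical sketch: map a text-adventure scene mood to a scale and tempo,
# then emit a short MIDI phrase with mido. Mood names and note choices are made up.
import random
import mido

MOODS = {
    "tense": {"scale": [57, 60, 62, 63, 65], "bpm": 140},  # A-minor flavored, fast
    "calm":  {"scale": [60, 62, 64, 67, 69], "bpm": 80},   # C pentatonic, slow
}

def compose(mood: str, bars: int = 2, path: str = "scene.mid"):
    cfg = MOODS[mood]
    mid = mido.MidiFile()                       # default 480 ticks per beat
    track = mido.MidiTrack()
    mid.tracks.append(track)
    track.append(mido.MetaMessage("set_tempo", tempo=mido.bpm2tempo(cfg["bpm"])))
    for _ in range(bars * 4):                   # four quarter notes per bar
        note = random.choice(cfg["scale"])
        track.append(mido.Message("note_on", note=note, velocity=80, time=0))
        track.append(mido.Message("note_off", note=note, velocity=0, time=480))
    mid.save(path)

compose("tense")
```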
Bottom line: people are combining on-device memory, creative collaboration, and live-composition workflows to turn LLMs from curiosities into everyday tools [1][2][3][4][5].
References
[1] Two Ideas for Humans Learning from LLMs. Proposes two ideas for how humans can learn from large language models and leverage them in education and beyond.
[2] I built a personal AI that learns who you are and what actually works for you. User builds a local personal AI memory with multiple LLMs; compares models, tunes autonomy, and seeks feedback on practical LLM integration and effectiveness.
[3] Have access to the LLM but don't know what to do with it .... Post asks for practical paths with LLM access; comments discuss resources, use cases, and skepticism about the tool's value.
[4] As a writer - which model would be better? A writer weighs writer-focused LLMs (Qwen, Mixtral, Gemma, Mistral) against hardware limits and prompt context for scripting and integration ideas.
[5] My LLM-powered text adventure needed a dynamic soundtrack, so I'm training a MIDI generation model to compose it on the fly. Here's a video of its progress so far. Describes using multiple LLMs to build a dynamic open-world text adventure paired with a MIDI generator; covers stages, prompts, and a local small-models approach.