I did an experiment in generating material with NotebookLM to improve my understanding.
NotebookLM (Google NotebookLM; “LM” is short for “Language Model”) is an online research and note-taking tool developed by Google Labs that uses artificial intelligence (AI), specifically Google Gemini, to help users interact with their documents.
I’m relatively sceptical about AI in these matters. I think the AI showed a typical result here: the broad “general” information was maybe 65% correct, while the more specific information (including everything you might want to know about how to connect things to things) was utterly wrong. The bot connected “clock” to the encoders and put the I2C lines to 5V, and that was enough for me to know.
Hi Hannes, agreed, we need to be sceptical. On the other hand, it did manage to distill quite a few gotchas in the visuals and audio. Also, this was generated without extra human input; adding some could have produced better results.
I’d honestly be interested in the gotchas. But I’d weigh them against the almost certain fatal damage to the machine if one followed these suggestions.
This is a very interesting approach to user familiarisation and training. I just finished listening to the 20-minute episode, and it is mostly very accurate, provides some great info and is fairly easy to listen to. One must try to avoid being distracted by knowing the two voices are not really human and, worse… ignore how much praise @wyleu gets! (Much soup for you tonight.)
I think we could possibly use this to create some curated introduction tutorials. This one has a great intro about what zynthian is and a section on building, plus some sections on customisation and DIY ethos. I would like to see those in separate podcasts. @jawn, can you provide some info on how you prompted NotebookLM to produce this? Maybe we could use it (or something similar) to create specific, targeted content. Are we able to adjust the output, e.g. correct errors or misconceptions?
Does anyone know the licensing around the output of such a technology? Are we free to create media and distribute it?
There may also be a way to hook an LLM into zynthian’s workflow recorder to generate a description of a demonstration.