How to generate Javadoc (and other types of) documentation while you are drinking a coffee? Use a GPT chat with whatever LLM model you prefer.
Understanding the Weak Spots of AI
At the current state, most AI systems have a pretrained set of information, with historical data built into the language model. If you ask about a very recent event, you are most likely going to get an "I don't know."
Updating the Information in LLM GPTs
There is a term called the Context Window: when you chat with a GPT, it takes into account what you most recently asked; otherwise, it falls back to its database snapshot. There are, however, tools to feed updated information into the large model. I just found a Git repository that extracts source code in an LLM-friendly way: https://github.com/yamadashy/repopack. The big AI providers most probably have something similar. Otherwise, they would need to re-run the machine learning from scratch, which may take months, a lot of electricity, and additional fine-tuning. One part of that fine-tuning is probably the filtering of information that may be considered hateful, harmful, or pornographic. This is especially true with the latest improvements in audio and visual content.
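To illustrate what such a repository-packing tool does conceptually, here is a minimal Java sketch (my own assumption of the approach, not repopack's actual implementation) that walks a source tree and concatenates every .java file into a single text file you can paste into a chat. The source root and output file name are hypothetical.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

/**
 * Minimal sketch of the "pack a repository for an LLM" idea:
 * walk a source tree and concatenate every .java file, each preceded
 * by a header line, into one text file that fits in a chat prompt.
 * This is NOT how repopack itself is implemented; paths are examples.
 */
public class RepoPackSketch {

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Path.of("src/main/java");      // assumption: standard Maven/Gradle layout
        Path output = Path.of("llm-friendly-dump.txt");  // hypothetical output file name

        StringBuilder packed = new StringBuilder();
        try (Stream<Path> files = Files.walk(sourceRoot)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .sorted()
                 .forEach(p -> {
                     try {
                         // Mark each file so the model knows where one class ends and the next begins
                         packed.append("===== FILE: ").append(p).append(" =====\n");
                         packed.append(Files.readString(p)).append("\n\n");
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 });
        }
        Files.writeString(output, packed.toString());
        System.out.println("Packed " + packed.length() + " characters into " + output);
    }
}
```

Keeping the packed output small enough to fit the model's context window is the whole point; if the dump is too large, you split it per package or per class.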
Javadoc Documentation with LLM GPT
What you need to create Javadoc documentation is a language model that allows a large enough number of tokens (symbols/words) per request. Once you find such a model, the LLM will go through YOUR code and figure out what it is about. It will most probably not need:
- Any data that is super up-to-date
- (Optionally) to have been trained on your data.
I've personally used this approach to create the Javadoc and the documentation of https://github.com/tomavelev/java_bootstrap_vaadin_components, which I am using in my most recent personal microservice projects: https://programtom.com/dev/product-category/technologies/spring-boot-framework/?orderby=date. A short illustration of the kind of Javadoc an LLM produces is shown below.
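To make that concrete, here is the kind of output you can expect when you paste a small class into a chat and ask for Javadoc on every public member. The class below is a hypothetical example of my own, not taken from the linked repository; the comments mirror the style an LLM typically generates.

```java
/**
 * Utility for building simple greeting messages.
 * <p>
 * Hypothetical example class used only to illustrate LLM-generated Javadoc;
 * it is not part of the java_bootstrap_vaadin_components repository.
 */
public final class GreetingFormatter {

    private GreetingFormatter() {
        // Prevent instantiation of this utility class.
    }

    /**
     * Builds a greeting for the given user name.
     *
     * @param userName the name to greet; must not be {@code null} or blank
     * @return a greeting of the form {@code "Hello, <userName>!"}
     * @throws IllegalArgumentException if {@code userName} is {@code null} or blank
     */
    public static String greet(String userName) {
        if (userName == null || userName.isBlank()) {
            throw new IllegalArgumentException("userName must not be null or blank");
        }
        return "Hello, " + userName + "!";
    }
}
```

The model only needs the code itself to write comments like these; no up-to-date world knowledge and no prior training on your code base is required.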