Here is my example. In the last two weeks I have rewritten, from scratch, one of my genomics pipelines, which can analyse 1000 clinical exomes in a few hours and return candidate genetic causes of rare disease.
* 42 scripts/programs.
* 3694 lines of code.
* Several languages.
* Process raw FASTQ data up to germline variant calling.
* Runs on a high-performance computing cluster.
* Process VCF data into clinical variant interpretation.
* Options for custom filtering strategies (see the sketch after this list).
* Statistical analysis and logs.
* Replaced several mainstream tools that are difficult to manage.
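To give a flavour of the filtering stage, here is a minimal sketch of one possible strategy: keep PASS variants whose population allele frequency falls below a rarity cutoff. The INFO key (`AF`) and the 0.001 threshold are illustrative placeholders, not my production criteria.

```python
#!/usr/bin/env python3
"""Minimal sketch of a custom VCF filtering pass for rare-disease work.

Assumptions (placeholders, not production settings): the VCF is
uncompressed, and population allele frequency is annotated in the
INFO column under the key "AF".
"""
import sys

AF_CUTOFF = 0.001  # keep only rare variants (hypothetical threshold)

def info_to_dict(info: str) -> dict:
    """Parse a VCF INFO column ("K=V;FLAG;...") into a dict."""
    out = {}
    for item in info.split(";"):
        key, _, value = item.partition("=")
        out[key] = value
    return out

def keep(line: str) -> bool:
    fields = line.rstrip("\n").split("\t")
    filt, info = fields[6], info_to_dict(fields[7])  # FILTER, INFO columns
    if filt not in ("PASS", "."):
        return False  # drop variants failing upstream filters
    try:
        # Missing AF defaults to 0, i.e. unannotated variants are kept
        af = float(info.get("AF", "0"))
    except ValueError:
        return True  # keep variants with unparsable AF for manual review
    return af <= AF_CUTOFF

def main(path: str) -> None:
    with open(path) as vcf:
        for line in vcf:
            if line.startswith("#") or keep(line):
                sys.stdout.write(line)  # header lines pass through unchanged

if __name__ == "__main__":
    main(sys.argv[1])
```

Run it as `python filter_rare.py input.vcf > rare.vcf`; a real pipeline would also need to handle bgzipped input and multi-allelic records.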
Final results:
1. This work was ~6-10x faster.
2. I probably would never have produced this improved version without it, as a manual rewrite would have taken too long.
3. Instead of getting stuck at a difficult impasse, I can get context-specific alternatives and testable example code.
Only ChatGPT. I usually state bullet points and pseudocode and ask for an answer, or paste current code with a request to fix the error. With good requests I usually get a working answer, but it sometimes needs careful checking.
How would you estimate the work breaks down between you and ChatGPT, as a rough percentage? It sounds like you're still doing a lot of the coding, obviously lots of prompt work, analysing and understanding the results, etc.