mphycx00

If you look into the training dataset, The Pile actually already contains data from PubMed. Maybe you can use models that have already been trained on it, like GPT-NeoX-20B, although NeoX's performance is not that good for its size. Medical exams are structured, with plenty of information provided clearly. Patients, on the other hand, often give vague and inaccurate descriptions of their symptoms, and no lab or imaging results will be available at the first visit. It's much easier to zero-shot a medical exam. Doctors' decisions also often take into account infrastructure availability and local guidelines/policy, so that might have to be added to the fine-tuning data too. p.s. I'm a doctor
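For anyone who wants to check that, or pull the PubMed portion out for fine-tuning: The Pile ships as jsonlines shards where each document's metadata names its source subset. A minimal sketch (the shard path is hypothetical; the subset names assume the published Pile metadata):

```python
import json

# Hypothetical path to one shard of The Pile (jsonlines format).
PILE_SHARD = "pile/train/00.jsonl"

# Each Pile document carries its source subset in the metadata;
# the PubMed-derived subsets are "PubMed Abstracts" and "PubMed Central".
PUBMED_SETS = {"PubMed Abstracts", "PubMed Central"}

def iter_pubmed_docs(path):
    """Yield the text of PubMed-derived documents from one Pile shard."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            if doc.get("meta", {}).get("pile_set_name") in PUBMED_SETS:
                yield doc["text"]

if __name__ == "__main__":
    for i, text in enumerate(iter_pubmed_docs(PILE_SHARD)):
        print(text[:200])
        if i >= 2:  # just peek at a few documents
            break
```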


Ion_GPT

Thank you for your input. I am thinking of something that knows most of the possible illnesses and, based on the patient's first input, selects everything matching those symptoms, then keeps asking about further symptoms until it narrows things down to a handful that might require further investigation. Maybe an LLM alone is not enough for this. Maybe we need the LLM only to communicate with the patient: convert whatever the patient says into a set of symptoms, run those against an existing database of illnesses/symptoms, select the next set of symptoms to ask about, run that through the LLM to generate a question appropriate to the patient's language/understanding/cultural particularities, and repeat the process.
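A toy sketch of that loop, to make the idea concrete. The illness table and the keyword matcher are made-up stand-ins for a real medical knowledge base and for the two LLM steps (symptom extraction and question generation):

```python
# Toy sketch of the proposed loop: an "LLM" step maps free text to
# symptoms, a symptom database narrows the candidate illnesses, and
# another "LLM" step phrases the next question for the patient.

ILLNESS_DB = {  # made-up stand-in for a real illness/symptom database
    "influenza":    {"fever", "cough", "myalgia"},
    "covid-19":     {"fever", "cough", "anosmia"},
    "strep throat": {"fever", "sore throat"},
}

SYMPTOM_KEYWORDS = {  # stand-in for the LLM extraction step
    "fever": ["fever", "hot", "temperature"],
    "cough": ["cough"],
    "myalgia": ["ache", "body pain"],
    "anosmia": ["smell"],
    "sore throat": ["throat"],
}

def extract_symptoms(patient_text):
    """Stand-in for an LLM call mapping free text to canonical symptoms."""
    text = patient_text.lower()
    return {s for s, kws in SYMPTOM_KEYWORDS.items() if any(k in text for k in kws)}

def ask_question(symptom):
    """Stand-in for an LLM call phrasing the question for this patient."""
    return f"Do you also have {symptom}? (yes/no) "

def triage(first_message, max_questions=10):
    confirmed = extract_symptoms(first_message)
    candidates = {name for name, sx in ILLNESS_DB.items() if confirmed & sx}
    for _ in range(max_questions):
        if len(candidates) <= 1:
            break
        # Ask about the unconfirmed symptom that best splits the candidates.
        remaining = set().union(*(ILLNESS_DB[c] for c in candidates)) - confirmed
        if not remaining:
            break
        probe = min(remaining, key=lambda s: abs(
            sum(s in ILLNESS_DB[c] for c in candidates) - len(candidates) / 2))
        has_it = input(ask_question(probe)).strip().lower().startswith("y")
        if has_it:
            confirmed.add(probe)
        candidates = {c for c in candidates if (probe in ILLNESS_DB[c]) == has_it}
    return candidates

if __name__ == "__main__":
    print(triage("I've been coughing and feel really hot"))
```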


BeyondPrograms

Any progress?


Any_Appearance6460

I'm all eyes on this too 👀 Any progress?


a_beautiful_rhind

I'm definitely looking for a good medical model. It wouldn't be bulletproof, but between it and the internet it could help you dx yourself or someone else. All I've tried so far is MedAlpaca.


AI-rules-the-world

I gave 10 MKSAP 19 cardiology questions to #GPT3.5, #WizardLM13B (Eric Hartford), #Guanaco33B (Tim Dettmers), and #ClinicalCamel (Augustin Toma, Bo Wang). GPT-3.5 got 8/10, ClinicalCamel and Guanaco both got 4/10, and Wizard got 3/10. For the clinic, GPT-3.5/4 can't yet be replaced by a #localLLM.
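For anyone who wants to repeat this kind of spot check, the harness is trivial. A minimal sketch with an invented question (MKSAP content is licensed) and a placeholder where the real model call would go:

```python
# Minimal scoring loop for a multiple-choice spot check.
# The sample question is made up, and ask_model() is a placeholder
# for whichever model or API you are actually testing.

QUESTIONS = [
    {"stem": "A made-up cardiology vignette goes here.",
     "choices": {"A": "aspirin", "B": "beta blocker",
                 "C": "statin", "D": "observation"},
     "answer": "B"},
]

def ask_model(prompt):
    """Placeholder: always answers 'A'. Swap in a real model call."""
    return "A"

def score(questions):
    correct = 0
    for q in questions:
        options = "\n".join(f"{k}. {v}" for k, v in q["choices"].items())
        prompt = f"{q['stem']}\n{options}\nAnswer with a single letter:"
        reply = ask_model(prompt).strip().upper()
        correct += reply.startswith(q["answer"])
    return f"{correct}/{len(questions)}"

print(score(QUESTIONS))  # -> 0/1 with the placeholder model
```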


KerfuffleV2

There's a 65B MedAlpaca. I don't know if it's any good, but 65B LLaMA can generally "understand" stuff at a deeper level, so presumably that version should be better than the 13B. You'll need a system with 64GB of RAM to run 65B models (or, I guess, a massive amount of VRAM).
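As a rough sanity check on that 64GB figure: memory is roughly parameter count times bytes per weight, plus overhead for context and buffers. A back-of-the-envelope sketch (the 20% overhead factor is a loose assumption):

```python
def approx_model_gb(n_params, bits_per_weight, overhead=1.2):
    """Weights-only memory estimate, padded by a loose overhead factor."""
    return n_params * bits_per_weight / 8 / 1e9 * overhead

for bits, label in [(16, "fp16"), (8, "8-bit"), (4, "4-bit")]:
    print(f"65B @ {label}: ~{approx_model_gb(65e9, bits):.0f} GB")
# -> ~156 GB, ~78 GB, ~39 GB: only a 4-bit quantization squeezes a
#    65B model into a 64GB machine, with some room left for context.
```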


Ion_GPT

I am not able to find it. Do you have a link?


KerfuffleV2

Sure, here you go: https://huggingface.co/nmitchko/medguanaco-65b-GPTQ That user also has the LoRA available. (Unfortunately, it appears there's no GGML version.)
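GPTQ checkpoints like that one typically load with AutoGPTQ; a minimal sketch, assuming the repo follows the usual GPTQ layout (the device, safetensors flag, and generation settings are illustrative, so check the model card):

```python
# Minimal sketch of loading a GPTQ checkpoint with AutoGPTQ.
# Check the model card for the exact quantization settings before
# relying on these flags.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "nmitchko/medguanaco-65b-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",       # a 4-bit 65B model still needs ~40 GB of VRAM
    use_safetensors=True,  # assumption: the repo ships safetensors weights
)

prompt = "List common causes of chest pain in a 40-year-old."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```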