AIML430_seminar_notes/mon4thaugust/mon4thaugust.md
2025-08-18 11:11:24 +12:00

Athletic performance

The study used more classical machine learning techniques on data collected from women's basketball athletes to predict various performance indicators.

I agree with the limitations they raised; I think the most limiting factor of this study is how hand-tuned and manual the learning process was.

Some questions I would have asked:

  • Did the authors look into using a more flexible neural-network-style architecture? Specifically, might it be possible to train a base model that could then be fine-tuned to different sports/divisions?
  • Am I correct in understanding that the model took only 8 features, which were expanded to 20 values fed into the model?
  • Lastly, were any teams given access to this data and its predictions, to see what coaches thought about their usefulness?
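As context for the second question above, a jump from 8 raw features to 20 input values is commonly the result of one-hot encoding categorical features. A minimal sketch of that expansion (the feature names and categories here are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical categorical feature: a player's position. One-hot
# encoding turns this single feature into three input values, which
# is one plausible way 8 features could expand to 20 model inputs.
positions = ["guard", "forward", "center"]

def one_hot(value, categories):
    """Return a one-hot vector for `value` over `categories`."""
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

row = one_hot("forward", positions)
print(row)  # [0. 1. 0.]
```

Whether the study actually used one-hot encoding (rather than, say, engineered derived features) is exactly what the question is asking.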

Conversational diagnostics

This study developed and explored a special (I assume just fine-tuning) process to make an LLM that is useful in "replacing" a doctor. It is interesting to hear that it did so well on many of the metrics they used; it matches my intuition that a model could do better than a human at diagnosis. However, as pointed out in the summary, this model would still be a long way from being 'deployed' due to various engineering and regulatory problems.

Some questions I have are:

  • I assume this model was fine-tuned from a base LLM. What was that base LLM, and how much self-play did it have? If it was instead trained from scratch, how large was the model? It seems, however, that the training data and process are somewhat secret.
  • Was the experiment done blind? If so, was the patient asked to predict whether they had spoken with an AI or a human?
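If such a blind check had been run, scoring it would be straightforward: tally how often patients correctly guessed who they spoke with. A hypothetical sketch (the session data below is invented, not from the study):

```python
# Hypothetical blind-evaluation tally: each session records who the
# patient actually spoke with and what they guessed. Guessing at
# chance (~50%) would suggest patients could not tell AI from human.
sessions = [
    {"actual": "ai",    "guess": "human"},
    {"actual": "ai",    "guess": "ai"},
    {"actual": "human", "guess": "human"},
    {"actual": "human", "guess": "ai"},
]

correct = sum(s["actual"] == s["guess"] for s in sessions)
accuracy = correct / len(sessions)
print(f"{correct}/{len(sessions)} correct guesses ({accuracy:.0%})")
```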