Primary
- A Single Direction of Truth: An Observer Model's Linear Residual Probe Exposes and Steers Contextual Hallucinations — O'Neill, Chalnev, Kirkby, Zhao and Jayasekara.
- Resurrecting the Salmon: Rethinking Mechanistic Interpretability with Domain-Specific Sparse Autoencoders — O'Neill, Jayasekara and Kirkby.
- Guanylate-binding proteins: mechanisms of pattern recognition and antimicrobial functions — Kirkby et al.
- Streptococcus makes the cut: Gasdermin A-induced pyroptosis — Zhao, Kirkby and Man.
- Running to save sight: The effects of exercise on retinal health and function — Chu-Tan, Kirkby and Natoli.
Co-Authored
- Exploring Differences in Functional Connectivity in Australian Rules Football Players: A Resting-State fMRI Study on the Default Mode Network — Tran et al.
- Four-Dimensional Flow MRI for Cardiovascular Evaluation (4DCarE): A Prospective Non-Inferiority Study of a Rapid Cardiac MRI Exam: Study Protocol and Pilot Analysis — Qin et al.
- Inflammasome protein scaffolds the DNA damage complex during tumor development — Shen et al.
- Immunity against Moraxella catarrhalis requires guanylate-binding proteins and caspase-11-NLRP3 inflammasomes — Enosi-Tuipulotu et al.
- Voluntary exercise modulates pathways associated with amelioration of retinal degenerative diseases — Chu-Tan et al.
- Molecular mechanisms activating the NAIP-NLRC4 inflammasome: implications in infectious disease, autoinflammation, and cancer — Kay et al.
Conference Proceedings
- Representation of a hierarchical goal-directed behaviour in medial frontal cortex — Kirkby et al.
- A spatial navigation paradigm for investigating hierarchical planning — Kirkby et al.
- Multi-modal detection of changes in MND for the quantitative evaluation of response to 3K3A-APC treatment — Kirkby et al.
Artefacts
- Practical LoRA Research — Kirkby and O'Neill.
- Training loss predicts evaluation performance, even for non-verifiable tasks — O'Neill and Kirkby.
- Iterative SFT (iSFT): dense reward learning — O'Neill, Liu, Partridge and Kirkby.
- Lumina: building self-improving evaluation through customer-in-the-loop refinement — O'Neill, Partridge, Kirkby, Liu and Stefanopoulos.
- Write small, learn forever: rank-1 LoRA for continual learning — O'Neill, Liu, Kirkby and Partridge.
- Attention-based attribution: what your model is actually looking at — O'Neill, Liu, Canavan, Kirkby and Jayasekara.
- Fine-tuning small open-source LLMs to outperform large closed-source models by 60% on specialized tasks — O'Neill, Jayasekara and Kirkby.