Hallucination reduction with CASAL: Contrastive Activation Steering For Amortized Learning
Similar articles
arXiv – cs.LG • AdaGradSelect: Adaptive block selection accelerates fine-tuning of SLMs
arXiv – cs.AI • New method boosts depth generalization of language models on logic tasks
arXiv – cs.AI • Efficiency vs. Alignment: Investigating Safety and Fairness Risks in Parameter-Efficient Fine-Tuning of LLMs
arXiv – cs.LG • ScaLoRA: Optimally Scaled Low-Rank Adaptation for Efficient High-Rank Fine-Tuning
arXiv – cs.LG • TinyGraphEstimator: Adapting Lightweight Language Models for Graph Structure Inference
arXiv – cs.LG • Fine-tuning of Large Language Models for Domain-Specific Cybersecurity Knowledge