The interpretability of AI is just as important as its performance. In the LegalAI field, there have been efforts to enhance the interpretability of models, but a trade-off between interpretability and prediction accuracy remains inevitable. In this paper, we introduce a novel framework called LK-IB for compulsory measure prediction (CMP), one of the critical tasks in LegalAI. LK-IB leverages Legal Knowledge and combines an Interpretable model and a Black-box model to balance interpretability and prediction performance. Specifically, LK-IB involves three steps: (1) inputting cases into the first module, where first-order logic (FOL) rules are used to make predictions and output them directly if possible; (2) sending cases to the second module if FOL rules are not applicable, where a case distributor categorizes them as either "simple" or "complex"; and (3) sending simple cases to an interpretable model with strong interpretability and complex cases to a black-box model with outstanding performance. Experimental results demonstrate that the LK-IB framework provides more interpretable and accurate predictions than other state-of-the-art models. Given that the majority of cases in LegalAI are simple, the idea of model combination has significant potential for practical applications.
Artificial Intelligence and Law – Springer Journals
Published: May 30, 2023
Keywords: Legal knowledge; Model combination; Compulsory measure prediction; Interpretability
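The abstract describes a three-stage routing pipeline: FOL rules first, then a case distributor that sends "simple" cases to an interpretable model and "complex" cases to a black-box model. The following Python sketch illustrates that routing idea only; the example rule, the feature representation, and the concrete model choices (a decision tree standing in for the interpretable branch, an MLP for the black-box branch) are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the LK-IB routing idea described in the abstract.
# All rules, class names, and model choices are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

from sklearn.tree import DecisionTreeClassifier    # stand-in interpretable model
from sklearn.neural_network import MLPClassifier   # stand-in black-box model


@dataclass
class Case:
    features: list   # numeric features extracted from the case record
    is_simple: bool  # output of a hypothetical case distributor
    facts: dict      # structured facts a legal-knowledge rule might test


def fol_rule_prediction(case: Case) -> Optional[str]:
    """Step 1: apply hand-written legal-knowledge (FOL) rules.

    Returns a compulsory-measure label when a rule fires, else None.
    The rule below is a made-up placeholder, not a rule from the paper.
    """
    if case.facts.get("suspect_is_minor") and case.facts.get("offense_is_minor"):
        return "release_on_bail"
    return None


class LKIBPredictor:
    """Steps 2-3: route rule-uncovered cases to an interpretable or black-box model."""

    def __init__(self):
        self.interpretable = DecisionTreeClassifier(max_depth=4)
        self.black_box = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)

    def fit(self, cases, labels):
        # Train each branch only on the cases its distributor label assigns to it.
        simple = [(c.features, y) for c, y in zip(cases, labels) if c.is_simple]
        complex_ = [(c.features, y) for c, y in zip(cases, labels) if not c.is_simple]
        if simple:
            xs, ys = zip(*simple)
            self.interpretable.fit(list(xs), list(ys))
        if complex_:
            xs, ys = zip(*complex_)
            self.black_box.fit(list(xs), list(ys))

    def predict(self, case: Case) -> str:
        # Step 1: FOL rules take precedence and are fully interpretable.
        rule_label = fol_rule_prediction(case)
        if rule_label is not None:
            return rule_label
        # Steps 2-3: the case distributor decides which learned model answers.
        model = self.interpretable if case.is_simple else self.black_box
        return model.predict([case.features])[0]
```

Under this reading, interpretability is preserved wherever possible (rules, then a shallow tree), and the black-box model is consulted only for the minority of complex cases.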