Gen3: Our 2024 Vision for Generative AI

A text and vision AI system for content understanding and risk assessment.

Introduction

At Cognitiv+, we are dedicated to advancing Artificial Intelligence for corporate documents. Throughout 2024, we will dive deeply into two fundamental areas: Large Language Models (LLMs) and ChatBots. Our mission rests on three main pillars: utilizing existing LLMs for summarizing contracts and clauses, fine-tuning our proprietary Legal-LLM on contractual records, and designing a conversationally adept ChatBot. This article outlines our planned explorations in each area.

Utilizing Existing LLMs for Contract Summarization

Pretrained Large Language Models, such as Claude, Falcon, LLaMA, PaLM, and Vicuna 33B, exhibit an incredible aptitude for text summarization tasks. By capitalizing on these models, we aim to:

  1. Perform extractive summarization, pinpointing salient phrases directly from the original contract text.
  2. Implement abstractive summarization, producing condensed yet insightful renditions of contracts.
  3. Compare summaries generated by LLMs with those created by humans to gauge performance.

Code Snippet (abstractive summarization with PEGASUS):

from transformers import PegasusTokenizer, PegasusForConditionalGeneration

tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
model = PegasusForConditionalGeneration.from_pretrained('google/pegasus-large')

input_text = 'Insert entire contract text here...'
# Tokenize, truncating to the model's maximum input length.
inputs = tokenizer(input_text, return_tensors='pt', truncation=True, padding='longest')
# Beam search is needed for early_stopping to take effect.
summary_ids = model.generate(inputs['input_ids'], num_beams=4,
                             max_length=150, min_length=50, early_stopping=True)
# generate() returns a batch of sequences; decode the first one.
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
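For extractive summarization (item 1 above), a lightweight baseline is to rank sentences by TF-IDF weight and return the top-scoring ones verbatim. The following is a minimal sketch using scikit-learn; the naive sentence splitter and mean-TF-IDF scoring are simplifying assumptions for illustration, not our production pipeline.

Code Snippet (extractive summarization sketch):

import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(contract_text, num_sentences=5):
    # Naive sentence splitter; a real pipeline would use a proper tokenizer.
    sentences = [s.strip() for s in re.split(r'(?<=[.;])\s+', contract_text) if s.strip()]
    if len(sentences) <= num_sentences:
        return sentences
    # Score each sentence by the mean TF-IDF weight of its terms.
    tfidf = TfidfVectorizer(stop_words='english').fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    # Return the top-scoring sentences in their original document order.
    top = sorted(np.argsort(scores)[-num_sentences:])
    return [sentences[i] for i in top]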

Fine-tuning Legal-LLM on Contractual Documents

Drawing inspiration from successful endeavors like GEN2, our objective is to fine-tune a Legal-LLM on large collections of legal and contractual documents, notably EUR-Lex and EDGAR. This process is intended to bolster accuracy in tasks such as:

  1. Entity recognition: Identifying names of entities, roles, and titles.
  2. Keyword spotting: Detecting important keywords and phrases.
  3. Classification: Distinguishing contract categories, sections, and subsections.

Comparative analyses against our existing custom models will determine whether these fine-tuned LLMs surpass incumbent performance.
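To make the fine-tuning setup concrete, the snippet below is a minimal sketch of adapting a pretrained encoder to entity recognition (task 1) with the Hugging Face Trainer. The model name, label scheme, and toy training example are illustrative assumptions; our actual Legal-LLM training uses far larger annotated corpora.

Code Snippet (entity-recognition fine-tuning sketch):

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          Trainer, TrainingArguments)

# Illustrative label scheme for contract entities; the real schema is richer.
labels = ['O', 'B-PARTY', 'I-PARTY', 'B-ROLE', 'I-ROLE']

model_name = 'roberta-base'  # placeholder; any pretrained encoder works
tokenizer = AutoTokenizer.from_pretrained(model_name, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))

# Toy training data standing in for a large annotated contract corpus.
data = Dataset.from_dict({
    'tokens': [['XYZ', 'Corp.', 'shall', 'indemnify', 'ABC', 'Inc.']],
    'ner_tags': [[1, 2, 0, 0, 1, 2]],
})

def tokenize_and_align(batch):
    enc = tokenizer(batch['tokens'], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, tags in enumerate(batch['ner_tags']):
        prev, row = None, []
        for w in enc.word_ids(i):
            # Label only the first subword of each word; ignore the rest (-100).
            row.append(tags[w] if w is not None and w != prev else -100)
            prev = w
        all_labels.append(row)
    enc['labels'] = all_labels
    return enc

train = data.map(tokenize_and_align, batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='legal-ner', num_train_epochs=3,
                           per_device_train_batch_size=8, learning_rate=2e-5),
    train_dataset=train,
)
trainer.train()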

Designing a Sophisticated ChatBot Interface

Interactive user experiences are paramount in garnering acceptance and fostering seamless integration of AI-based solutions. Hence, we envision a ChatBot engineered to:

  1. Retrieve detailed information pertinent to the contracts, such as party names, obligations, and duration.
  2. Generate lucid summarizations devoid of excessive legalese, easing user comprehension.

Example ChatBot Script:

USER: Who are the contract parties?
CHATBOT: Sure, according to the agreement, the contract parties are XYZ Corp. and ABC Inc. Would you like to know something else?
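Under the hood, a first version of this ChatBot can answer factual questions extractively, by locating the answer span inside the contract text. The sketch below uses an off-the-shelf extractive question-answering pipeline from Hugging Face as a stand-in backend; the model choice and the hard-coded contract excerpt are assumptions for illustration, and the production system will pair retrieval with a generative LLM.

Code Snippet (extractive QA backend sketch):

from transformers import pipeline

# Off-the-shelf extractive QA model standing in for the ChatBot backend.
qa = pipeline('question-answering', model='deepset/roberta-base-squad2')

contract_text = (
    'This Services Agreement is entered into between XYZ Corp. and ABC Inc. '
    'The agreement shall remain in force for a period of twenty-four months.'
)

for question in ['Who are the contract parties?',
                 'What is the duration of the agreement?']:
    result = qa(question=question, context=contract_text)
    print(f"USER: {question}")
    print(f"CHATBOT: {result['answer']} (confidence {result['score']:.2f})")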

Addressing Challenges: Prompts and Infrastructure

Integrating LLMs into our infrastructure warrants careful consideration of numerous facets, particularly prompt construction and pipelines. Well-built pipelines and thoughtfully curated prompts can make the difference between average and exceptional LLM outcomes. These steps demand concerted attention to the following, sketched together after the list:

  1. Input formatting and normalization: Standardizing raw inputs to render them palatable for consumption by LLMs.
  2. Output filtering and enhancement: Refining outputted data to guarantee relevancy, correctness, and presentation style.
  3. Model selection and optimization: Choosing appropriate LLMs to match specific use cases and honing their efficiencies accordingly.
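The fragment below sketches how these three concerns can meet in a single pipeline step: a normalization pass, a prompt template, and a simple output filter. The template wording, the character budget, and the abstract generate callable are illustrative assumptions rather than a final design.

Code Snippet (prompt pipeline sketch):

import re

MAX_CHARS = 12_000  # illustrative context budget for the target LLM

def normalize(raw_text):
    # Collapse whitespace and strip stray artifacts before prompting.
    return re.sub(r'\s+', ' ', raw_text).strip()[:MAX_CHARS]

PROMPT_TEMPLATE = (
    'You are a legal assistant. Summarize the following contract in plain '
    'English, listing the parties, their obligations, and the duration.\n\n'
    'Contract:\n{contract}\n\nSummary:'
)

def summarize(raw_contract, generate):
    # `generate` is any LLM completion callable (an assumed interface,
    # not a specific vendor API).
    prompt = PROMPT_TEMPLATE.format(contract=normalize(raw_contract))
    output = generate(prompt)
    # Minimal output filter: drop empty lines and leaked prompt echoes.
    kept = [line for line in output.splitlines()
            if line.strip() and not line.startswith('Contract:')]
    return '\n'.join(kept)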

Cost-effective Summarization Techniques

Summarizing lengthy contracts manually demands substantial human effort, often at a steep price. As a result, we plan to explore creative alternatives, including:

  1. Semi-automatic summarization: Blending manual intervention and automated techniques to produce summaries economically.
  2. Transfer learning: Repurposing pre-trained models to augment summarization capabilities swiftly and affordably.
  3. Knowledge graph generation: Constructing concise visual representations to convey crucial contract facts succinctly (see the sketch below).
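As a concrete illustration of the third direction, the sketch below turns a handful of extracted contract facts into a small knowledge graph with networkx. The triples are hard-coded placeholders; in practice they would be produced by the extraction models described above.

Code Snippet (knowledge graph sketch):

import networkx as nx

# Placeholder triples; in practice these come from our extraction models.
facts = [
    ('XYZ Corp.', 'party_to', 'Services Agreement'),
    ('ABC Inc.', 'party_to', 'Services Agreement'),
    ('Services Agreement', 'duration', '24 months'),
    ('XYZ Corp.', 'obligated_to', 'indemnify ABC Inc.'),
]

graph = nx.DiGraph()
for subject, relation, obj in facts:
    graph.add_edge(subject, obj, relation=relation)

# Query the graph for everything known about one entity.
for _, obj, attrs in graph.out_edges('XYZ Corp.', data=True):
    print(f"XYZ Corp. --{attrs['relation']}--> {obj}")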

Conclusion

In 2024, Cognitiv+ strives to reshape the legal Contract AI and Document Intelligence landscape by merging Large Language Models with ChatBots. Through diligent exploration and targeted implementations, we aspire to amplify accuracy, streamline interactions, and decrease operating expenses. Ultimately, we aim to deliver intelligent, user-friendly Contract AI solutions primed to serve diverse industries and applications.
