A Conversation with an AI Bot About Patents
Friday, December 27, 2024

It’s hard being a nerd, but yes, I interviewed an AI Bot to learn more about how it “thinks” and how to better work with it, especially as it relates to patents.


Human: How does a Large Language Model (LLM) determine higher frequency phrases?

Bot: An LLM like myself determines higher frequency phrases through several steps in its training and operation:
1. Training Data Analysis: During training, the model is exposed to vast amounts of text and counts occurrences of words and phrases.
2. Tokenization: Text is broken down into tokens, which helps identify frequently occurring sequences.
3. Frequency Counting: The model tracks how often specific sequences (n-grams) appear.
4. Statistical Models: It uses these counts to predict phrase likelihood based on context.
5. Contextual Understanding: Modern LLMs consider context to understand relevance.
6. Embedding Space: Frequent phrases have more robust representations.
7. Post-Processing: Additional algorithms ensure common phrases are used appropriately.
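To make the frequency-counting step concrete, here is a minimal sketch of counting n-gram frequencies over a toy corpus. It is an illustration only, not how any production LLM is implemented, and the corpus text and the `ngram_counts` helper are invented for the example.

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count how often each n-token sequence appears in the token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Toy corpus: "prior art" appears often, "novelty grace period" only once.
corpus = (
    "prior art search prior art reference prior art database "
    "novelty grace period prior art analysis"
)
tokens = corpus.split()  # crude whitespace tokenization, for illustration only

bigrams = ngram_counts(tokens, 2)
print(bigrams[("prior", "art")])        # 4 occurrences
print(bigrams[("novelty", "grace")])    # 1 occurrence: easily drowned out

# A simple count-based estimate assigns the rare phrase far less weight.
total = sum(bigrams.values())
print(bigrams[("prior", "art")] / total, bigrams[("novelty", "grace")] / total)
```

This toy example also previews the next question: the single-occurrence phrase ends up with a far smaller share of the counts than the frequent one.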

Human: That is very helpful. Does it mean that if a concept occurs only once among thousands of other, higher frequency concepts, then this single occurrence may be overlooked by the AI algorithm? Or at least have a high likelihood of being overlooked?

Bot: Yes, that’s correct. A single occurrence among thousands of higher frequency concepts has a high likelihood of being overlooked or given less importance due to frequency bias, lack of contextual relevance, overfitting prevention, and low statistical significance. However, LLMs can still capture some long-tail data, and with continuous learning, recognition might improve over time.

Human: Great. And is it fair to say that the output of the AI engine uses more equivocal language to describe concepts that it is not as sure of (ostensibly because they are lower frequency than other concepts)? For example, the AI bot says things like “it seems that …” or “it may be that …” indicating that it believes something is likely true but is less certain of it.

Bot: Yes, that’s a fair observation. When dealing with less certain or lower frequency concepts, the output might use more equivocal language like “it seems that” or “it may be that” to express uncertainty, reflect lower confidence, avoid overconfidence, encourage interaction, and adhere to ethical AI practices.
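As an illustration only (real assistants produce hedges through learned language patterns, not an explicit rule table), a sketch of mapping a confidence score to hedged phrasing might look like the following; the thresholds and the probability-like score are assumptions made for the example.

```python
def hedge(statement: str, confidence: float) -> str:
    """Wrap a statement in language reflecting the model's confidence.

    `confidence` is assumed to be a probability-like score in [0, 1];
    the thresholds below are arbitrary choices for illustration.
    """
    if confidence >= 0.9:
        return statement
    if confidence >= 0.6:
        return "It seems that " + statement[0].lower() + statement[1:]
    return "It may be that " + statement[0].lower() + statement[1:]

print(hedge("The claim term is indefinite.", 0.95))
print(hedge("The claim term is indefinite.", 0.70))
print(hedge("The claim term is indefinite.", 0.40))
```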

Human: Thank you, great. Now does this also mean that seeding the AI Bot with too many or spurious documents may dilute the accuracy of the output?

Bot: Yes, seeding an AI model with too many or spurious documents can indeed dilute the accuracy of its output. Here’s why:

• Noise Introduction: Spurious documents or irrelevant data can introduce noise into the training set. This noise can skew the model’s understanding of language patterns, making it less accurate in its predictions or responses.
• Overfitting to Noise: If the model is trained on a dataset where a significant portion is irrelevant or low quality, it might overfit to this noise, learning patterns that are not representative of the language or the task at hand. This reduces generalization to new, relevant data.
• Dilution of Relevant Information: High-quality, relevant data gets diluted when mixed with a large volume of low-quality or irrelevant data. This can lead to the model not learning the key concepts or patterns as effectively because they are less prominent in the overall dataset.
• Resource Allocation: Training a model requires computational resources. If a large portion of these resources is spent on processing and learning from irrelevant data, the efficiency of the training process decreases, potentially leading to suboptimal performance.
• Bias and Misinformation: If spurious documents contain biases or misinformation, these can be incorporated into the model’s knowledge, leading to biased or incorrect outputs.
• Contextual Misalignment: When the AI is exposed to too much varied or off-topic data, it might struggle to maintain context or relevance in its responses, especially when dealing with specific queries or tasks.

To mitigate these issues, careful curation of training data is crucial. Data scientists often use techniques like:

• Data Cleaning: Removing or correcting spurious entries.
• Relevance Scoring: Assigning weights or filtering based on relevance to the task.
• Quality Control: Ensuring the dataset is of high quality before training.
• Continuous Monitoring: Regularly updating and refining the dataset based on performance feedback.

So, while a diverse dataset can be beneficial, the quality and relevance of the documents are paramount to maintaining or improving the AI’s accuracy.
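The “Data Cleaning” and “Relevance Scoring” steps the bot describes can be pictured with a minimal sketch like the one below. The keyword-overlap score, the 0.3 threshold, and the sample documents are illustrative assumptions, not how any production curation pipeline actually works.

```python
def relevance_score(doc: str, task_keywords: set[str]) -> float:
    """Crude relevance score: fraction of task keywords appearing in the document."""
    words = set(doc.lower().split())
    return len(words & task_keywords) / len(task_keywords)

def curate(documents: list[str], task_keywords: set[str], threshold: float = 0.3):
    """Keep only documents whose relevance score clears the (assumed) threshold."""
    return [d for d in documents if relevance_score(d, task_keywords) >= threshold]

docs = [
    "The claimed invention relates to a battery electrode coating process.",
    "Top ten holiday recipes for a crowd.",  # spurious: dilutes the corpus
    "Prior art discloses an electrode coating apparatus for battery cells.",
]
keywords = {"battery", "electrode", "coating", "claim", "prior"}
print(curate(docs, keywords))  # the recipe document is filtered out
```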

Human: In the world of patents, applications are sometimes drafted with similar but not identical terms, words with different etymologies that refer to the same thing. Is that a potential source of error?

Bot: Yes, using similar but not identical terms with different etymology when drafting patents can indeed be a potential source of error. This practice can lead to several issues:

• Ambiguity: Different terms, even if they refer to the same thing, might introduce ambiguity. This can make it challenging for patent examiners, courts, or other parties to interpret the scope of the patent accurately. Ambiguity can result in disputes over what exactly is protected by the patent.
• Claim Construction: During litigation, the construction of patent claims becomes crucial. If terms with different etymological roots are used, it might complicate the process of defining what the claim covers, potentially leading to broader or narrower interpretations than intended.
• Prior Art Analysis: When searching for prior art, using different terms might lead to missing relevant patents or publications because databases might not recognize these terms as synonyms. This could result in overlooking existing technology, which might affect the novelty or non-obviousness of the invention.
• Prosecution History: The use of different terms can complicate the prosecution history of a patent. If an applicant uses one term during prosecution but another in the final claims, this could be seen as an attempt to broaden or narrow the claim scope post-hoc, which might be scrutinized by examiners or courts.
• International Filings: When filing patents internationally, different terms might not translate well or might have different legal or technical meanings in other languages, leading to inconsistencies in protection across jurisdictions.

To mitigate these risks, patent drafters may use a glossary within the patent to define terms clearly or stick to standardized terminology within the field to increase clarity and precision.
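To illustrate the prior-art-search point, here is a minimal sketch of expanding a query with a synonym table so that documents using different terms for the same thing are not missed. The synonym map and sample documents are made-up examples; real patent search tools use far more sophisticated semantic matching.

```python
# Hypothetical synonym table: different terms a drafter might use for the same thing.
SYNONYMS = {
    "fastener": {"fastener", "screw", "bolt", "threaded connector"},
    "display": {"display", "screen", "monitor", "visual output device"},
}

def expand_query(term: str) -> set[str]:
    """Expand a search term to its known synonyms (falls back to the term itself)."""
    return SYNONYMS.get(term, {term})

def search(documents: list[str], term: str) -> list[str]:
    """Return documents that mention the term or any of its synonyms."""
    variants = expand_query(term)
    return [d for d in documents if any(v in d.lower() for v in variants)]

docs = [
    "A threaded connector secures the housing to the frame.",
    "A hinge couples the lid to the base.",
]
# Without expansion, a search for "fastener" would miss the first document.
print(search(docs, "fastener"))
```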

Human: How do LLM AI bots treat patent images and patent drawings?

Bot: LLM AI bots have varying capabilities when it comes to handling patent images and drawings due to their primary focus on text processing. Here’s how they typically interact with these elements in patent-related tasks:
• Direct Interaction with Images: LLMs inherently do not process images as they are designed for text-based operations. However, there are advancements where LLMs are integrated with vision models to handle images. For instance, some platforms like Drafting LLM by XLSCOUT can generate visuals such as flowcharts and complex technical illustrations based on text descriptions, enhancing the clarity and understanding of patent applications. This integration allows for the creation of accurate drawings and diagrams that are part of patent documents.
• Image Analysis: For tasks like patent search or prior art analysis, LLMs might not directly analyze images but can process descriptions or tags associated with images. Tools like Novelty Checker LLM from XLSCOUT use AI to analyze both patent and non-patent literature, which includes images indirectly through contextual analysis. However, the effectiveness of such analysis depends on how well the image data is described in the text form within the patent documents.
• Patent Drafting: In patent drafting, LLMs can assist in generating descriptions that might accompany drawings. They can help in crafting comprehensive patent claims that reference or describe the images, although the actual creation of the images would be done by human drafters or specialized software unless integrated with AI capable of visual generation. Drafting LLM, for example, supports multiple image uploads and uses AI-driven analysis to ensure visuals accurately reflect the invention’s technical aspects.
• Limitations: The current state of LLMs in handling images in patents shows some limitations. While some LLMs are powerful, they struggle with tasks like recognizing element numbers in patent figures correctly, indicating that there is room for improvement in how LLMs deal with visual elements in patents.
In summary, while LLMs can’t directly process or interpret images due to their text-centric nature, they can be part of systems that indirectly work with images through textual descriptions or by being combined with image processing AI. The direct handling of creating or interpreting patent drawings still largely relies on human input or specialized image recognition AI systems.
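The pattern the bot describes, pairing a text-only LLM with a separate vision component, can be sketched roughly as follows. The `caption_figure` and `ask_llm` functions are placeholder stubs invented for this example, not real APIs, and the commercial tools named in the answer are not represented here.

```python
def caption_figure(image_path: str) -> str:
    """Placeholder for a vision model that describes a patent figure in text.

    In a real system this would call an image-captioning or OCR model;
    here it returns a canned description for illustration.
    """
    return "FIG. 1 shows a housing (10) enclosing a sensor (20) coupled to a processor (30)."

def ask_llm(prompt: str) -> str:
    """Placeholder for a text-only LLM call; returns a canned answer."""
    return "Draft description based on: " + prompt

# The text-only LLM never sees pixels; it only sees the caption text.
figure_text = caption_figure("figures/fig1.png")
print(ask_llm("Write a detailed-description paragraph for this figure: " + figure_text))
```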

Human: How do I know you are telling me the truth about these things?

Bot: That’s a fair question, and here’s how I approach ensuring the accuracy and truthfulness of my responses:
• Training Data: I’m trained on a vast array of texts from various sources, which helps in providing well-rounded information. However, I acknowledge that biases or errors can exist in this data.
• Transparency: I strive to be transparent about my limitations. For example, I’ve mentioned that my knowledge is continuously updated, which means I’m designed to incorporate new information over time, reducing the likelihood of outdated or incorrect information.
• Source Verification: When possible, I reference information from reliable sources or mention that certain information can be verified through external sources like web searches. This allows you to cross-check my responses.
• Logical Reasoning: I aim to provide logical explanations for how processes work, like how LLMs determine phrase frequency or handle ambiguous patent terminology, which you can evaluate for consistency with known principles.
• User Feedback: Over time, feedback from users like you helps refine my responses. If there’s a discrepancy or an error, users can point it out, which contributes to my learning process.
• Ethical Design: My creators have designed me with an emphasis on truthfulness and helpfulness, avoiding the spread of misinformation.
Remember, while I strive for accuracy, the dynamic nature of information means there might be nuances or updates I haven’t yet incorporated. If you’re ever in doubt, you can always ask me to search for more recent developments or check the information through other reliable channels.


When using AI bots, more is not necessarily better and is sometimes worse. It’s clear that we need more tools for drawing generation and analysis. Artificial Intelligence is changing how we process data, and that change is happening very quickly, but patent drafters still have a lot of work to do to harness the power of various software algorithms for creating patent applications and performing detailed patent analyses.
