Diagnosing a Poor LLM Output
A project manager uses a large language model to create a project proposal document. Their entire input is the single prompt: "Write a detailed project proposal for developing a new mobile banking app. The proposal should be professional and comprehensive." The model produces a five-page document, but the manager observes that the sections are poorly connected, the financial projections are superficial, and the timeline is inconsistent with the described features. Based on the principles of effective task management for generative models, what is the most likely cause of these quality issues, and what specific procedural change should the manager implement on their next attempt?
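The procedural change the question points toward, replacing one monolithic prompt with an outline step followed by per-section prompts and a consistency pass, can be sketched as below. This is a minimal illustration, not a specific vendor API: `call_llm` is a hypothetical stand-in for whatever model call the manager actually uses.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; returns a stub string."""
    return f"<model response to: {prompt[:40]}...>"

def decomposed_proposal(topic: str) -> str:
    # Step 1: ask only for an outline, not the full document.
    outline_prompt = (
        f"List the section headings for a project proposal on: {topic}. "
        "Return one heading per line."
    )
    outline = call_llm(outline_prompt)
    headings = [line.strip() for line in outline.splitlines() if line.strip()]

    # Step 2: generate each section separately, passing the outline as
    # shared context so sections stay connected and mutually consistent
    # (e.g. the timeline can reflect the described features).
    drafts = []
    for heading in headings:
        section_prompt = (
            f"Write the '{heading}' section of a project proposal on {topic}. "
            f"Stay consistent with this outline:\n{outline}"
        )
        drafts.append(call_llm(section_prompt))

    # Step 3: a final revision pass stitches the sections together and
    # checks cross-section consistency.
    return call_llm("Revise into one coherent document:\n\n" + "\n\n".join(drafts))
```

Each step constrains the model to a narrower subtask, which is why decomposition tends to fix exactly the symptoms described: disconnected sections, shallow detail, and internal contradictions.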
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Decomposing a Complex LLM Task: Writing a Blog on AI Risks
LLM-Generated Outlines for Task Decomposition
A user wants to use a large language model to generate a comprehensive guide on 'Sustainable Urban Farming Techniques'. They find that a single, high-level request like 'Write a guide on sustainable urban farming' produces a disorganized and superficial article. Which of the following revised approaches would be most effective for producing a well-structured and detailed guide?
Diagnosing a Poor LLM Output
A user wants to use a large language model to create a script for a short documentary on the impact of renewable energy. To ensure a high-quality, well-structured output, they decide to break the task down into smaller parts. Arrange the following steps in the most logical and effective order.