Analysis of Memory Efficiency in Sequential Processing
Analyze the two systems described below. What is the core reason for the difference in their memory consumption as the sequence of data grows? Explain why System Beta's approach is more scalable.
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Computing Sciences
Analysis in Bloom's Taxonomy
Related
Analysis of Memory Efficiency in Sequential Processing
A system is designed to process a continuous stream of data points (e.g., sensor readings) and must maintain an up-to-date average of all points seen so far. Consider two approaches for updating this average after receiving the Nth data point:
Approach 1: Uses a recursive formula that takes the previous average (calculated up to point N-1) and the new Nth data point to compute the new average.
Approach 2: Stores every single data point from 1 to N in a list and recalculates the average of the entire list every time a new point arrives.
As the number of data points (N) grows very large, what is the most significant difference in the memory requirements between these two approaches?
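The two approaches can be sketched in Python (the function and variable names here are illustrative, not from the source). Approach 1 keeps a single running value, so its memory use is constant; Approach 2 keeps every point ever seen, so its memory grows linearly with N:

```python
def running_mean_update(prev_mean: float, new_value: float, n: int) -> float:
    """Approach 1: O(1) memory. Fold the Nth point into the mean using
    only the previous mean and the new value."""
    return prev_mean + (new_value - prev_mean) / n

def full_recalc_mean(stored_points: list, new_value: float) -> float:
    """Approach 2: O(N) memory. Retain every point and re-average the
    whole list on each arrival."""
    stored_points.append(new_value)
    return sum(stored_points) / len(stored_points)

data = [2.0, 4.0, 6.0, 8.0]

mean_a = 0.0   # Approach 1 state: one number, regardless of N
points = []    # Approach 2 state: grows by one entry per data point
for i, x in enumerate(data, start=1):
    mean_a = running_mean_update(mean_a, x, i)
    mean_b = full_recalc_mean(points, x)

# Both approaches produce the same mean; only their memory footprints differ.
```

Both loops end with the same average of the stream; the difference is that `points` holds all N values while `mean_a` is a single float.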
A system uses a recursive formula to update its memory state, where the new state Mem_i is calculated from the previous state Mem_{i-1} and the current input item_i. For this system to correctly calculate the state at step 1,000,000, it must store all one million individual inputs from item_1 to item_1,000,000 in its memory.
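The claim above can be tested directly: a recurrence of the form Mem_i = f(Mem_{i-1}, item_i) can be unrolled while holding only a single scalar of state. The sketch below uses a running mean as an illustrative choice of f (the source does not specify one):

```python
def update_state(prev_state: float, item: float, i: int) -> float:
    # Illustrative recurrence f: a running mean. Any update of the form
    # Mem_i = f(Mem_{i-1}, item_i) consumes the same two arguments.
    return prev_state + (item - prev_state) / i

state = 0.0  # Mem_0
for i in range(1, 1_000_001):
    item = float(i)                       # item_i arrives...
    state = update_state(state, item, i)  # ...is folded into the state...
    # ...and is then discarded; only `state` persists between steps.
```

Only the current state and the incoming item are ever held at once, so memory stays constant across one million steps: the recursive form makes storing the full input history unnecessary.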