Please note, this blog entry is from a previous course. You might want to check out the current one.
Improve the performance of panning. To do this, you need to try to understand where the CPU cycles are going when the graph is drawn. Is it our inefficient runProgram:usingVariableValues: method? Or is it all the Core Graphics calls we are making each time? Is there a simple way to reduce calls to both of these things in our drawRect:? Or is the performance issue something else entirely? If you are very brave, you can try to figure out how to use the Time Profiler (hold down Run in Xcode and pick Profile, then choose the Time Profiler from the dialog that appears). That’s the way to really know where the time’s going.
The Time Profiler shows that runProgram: takes about 80% of the processing time when pinching or panning. To know when we are actually pinching or panning, we add a new private property which will be set as long as a gesture has not finished:
@property (nonatomic) BOOL userIsInTheMiddleOfGesture;
...
@synthesize userIsInTheMiddleOfGesture = _userIsInTheMiddleOfGesture;
...
if (gesture.state == UIGestureRecognizerStateEnded)
    self.userIsInTheMiddleOfGesture = NO;
else if (!self.userIsInTheMiddleOfGesture)
    self.userIsInTheMiddleOfGesture = YES;
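The surrounding gesture handler is not shown above, so here is a rough sketch of how the flag might be maintained in a pan handler. The handler name pan: and the origin property are assumptions taken from the usual calculator-graph setup, not from this post:

- (void)pan:(UIPanGestureRecognizer *)gesture
{
    if (gesture.state == UIGestureRecognizerStateChanged ||
        gesture.state == UIGestureRecognizerStateEnded) {
        // update the flag first, so drawRect: knows whether a gesture is still in progress
        if (gesture.state == UIGestureRecognizerStateEnded)
            self.userIsInTheMiddleOfGesture = NO;
        else if (!self.userIsInTheMiddleOfGesture)
            self.userIsInTheMiddleOfGesture = YES;
        // shift the origin by the translation (origin is an assumed property of the graph view)
        CGPoint translation = [gesture translationInView:self];
        self.origin = CGPointMake(self.origin.x + translation.x, self.origin.y + translation.y);
        [gesture setTranslation:CGPointZero inView:self];
        // redraw with the new origin
        [self setNeedsDisplay];
    }
}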
Drawing only every tenth pixel while userIsInTheMiddleOfGesture is set cuts the load roughly in half, e.g. by adjusting drawRect: as follows:
NSInteger pixelDelta = 1;
if (self.userIsInTheMiddleOfGesture) pixelDelta = 10;
for (NSInteger xPixel = 0; xPixel <= widthInPixel; xPixel += pixelDelta) {
    ...
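widthInPixel is not defined in the snippet; presumably it comes from the view's bounds and its contentScaleFactor, so that the loop steps over real pixels rather than points. A minimal sketch of that assumption:

// drawing area in points, converted to a pixel count for the loop (assumption, not from the original post)
CGRect area = self.bounds;
NSInteger widthInPixel = area.size.width * self.contentScaleFactor;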
Another way would be to skip the calculation completely by using cached values while a gesture is in progress. To do that, create a cache property, synthesize it, and use lazy instantiation:
@property (nonatomic, strong) NSMutableArray *valueCache;
...
@synthesize valueCache = _valueCache;
...
- (NSMutableArray *)valueCache
{
    if (!_valueCache) _valueCache = [[NSMutableArray alloc] init];
    return _valueCache;
}
For the “normal” drawing, package the calculated values into NSValue objects and put them into the cache; during a gesture, retrieve them from there:
CGPoint value, point;
for (NSInteger xPixel = 0; xPixel <= widthInPixel; xPixel += pixelDelta) {
    if (self.userIsInTheMiddleOfGesture) {
        // during a gesture, reuse the cached values and only convert them to points
        value = [[self.valueCache objectAtIndex:xPixel] CGPointValue];
        point.x = [self xPointFromValue:value.x inRect:area originAtPoint:origin scale:scale];
        point.y = [self yPointFromValue:value.y inRect:area originAtPoint:origin scale:scale];
    } else {
        // normal drawing: calculate the value for this pixel and cache it
        value.x = [self xValueFromPixel:xPixel inRect:area originAtPoint:origin scale:scale];
        id result = [self.dataSource calculateYValueFromXValue:value.x];
        if (![result isKindOfClass:[NSNumber class]]) {
            start = YES;
            continue;
        }
        value.y = [result doubleValue];
        point.x = [self xPointFromPixel:xPixel inRect:area];
        point.y = [self yPointFromValue:value.y inRect:area originAtPoint:origin scale:scale];
        // start a fresh cache on each full redraw
        if (xPixel == 0) [self.valueCache removeAllObjects];
        [self.valueCache addObject:[NSValue valueWithCGPoint:value]];
    }
    if (self.dataSource.drawDots) {
        CGContextFillRect(context, CGRectMake(point.x - 0.5, point.y - 0.5, 1.0, 1.0));
    } else {
        if (start) {
            CGContextMoveToPoint(context, point.x, point.y);
            start = NO;
        } else {
            CGContextAddLineToPoint(context, point.x, point.y);
        }
    }
}
This reduces the time spent in runProgram: during a gesture to zero, but has the disadvantage that parts of the graph which were outside the view when the cache was filled are not drawn while the gesture is in progress.
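One caveat the text does not mention: the cache is indexed by xPixel, so if widthInPixel has grown since the cache was filled (for example after a rotation or resize), objectAtIndex: would raise a range exception during the gesture. A defensive sketch of the lookup (using the same helper methods as above) could guard against that:

if (self.userIsInTheMiddleOfGesture) {
    // skip pixels for which no cached value exists yet
    if (xPixel >= (NSInteger)[self.valueCache count]) {
        start = YES;
        continue;
    }
    value = [[self.valueCache objectAtIndex:xPixel] CGPointValue];
    point.x = [self xPointFromValue:value.x inRect:area originAtPoint:origin scale:scale];
    point.y = [self yPointFromValue:value.y inRect:area originAtPoint:origin scale:scale];
}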