Passion Projects

I've been working on quite a few projects that let me explore beyond my current knowledge of the iOS frameworks. The lessons I've learned recently have made me better as both a developer and a designer. It's been nearly a month since I shipped my first iOS 8 app, My Daily Grind, and I haven't stopped since.

The essence is that we owe it to ourselves to let it out. Embrace the need to create, and make your own dent in the universe.

I'm learning significantly faster than I would at my current job, which has its ups and downs. I've always been a quick learner: I tend to dive into an issue or a foreign concept and only come up for air when I'm confident I've leveled up.

This is why I have my passion projects: they give me the freedom to try any new technology I want, whether it's new to me or new to the world. I like that. I've used this mentality to continue my learning in the wonderful world of iOS. My next project, now on its final lap, is VLNRABLE; I finished most of its front-end development today.

Working on this project has been truly inspiring — Dennis and I are hoping to help a lot of people in need of a place to tell their stories in a trusting, healthy, and anonymous environment. Rather than working to get a contract signed or pitching ideas to get a paycheck, I've been sincerely humbled by the chance to change lives, including my own.

Create. Not for the money. Not for the fame. Not for the recognition. But for the pure joy of creating something and sharing it.

Looking through the Twitter app last night gave me some great inspiration, so I figured I'd check some things out in the morning. As soon as I woke up, I took an in-depth look at image buffering and performance, researching benchmarks for Image IO, Accelerate, Core Image, and some low-level C algorithms such as box blur, stack blur, and stack box blur.

// UIImage+Blurring.m

- (UIImage *)stackBlurredImageWithRadius:(CGFloat)radius
{
    if (radius < 1.0f || CGSizeEqualToSize(self.size, CGSizeZero)) {
        return self;
    }

    CGImageRef inputImage = self.CGImage;
    size_t bitsPerPixel = CGImageGetBitsPerPixel(inputImage);
    if (bitsPerPixel != 32) {
        // -normalize (elsewhere in this category) redraws the image into a 32-bit bitmap.
        UIImage *tempImage = [self normalize];
        inputImage = tempImage.CGImage;
    }

    CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inputImage));
    CFMutableDataRef mutableDataRef = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataRef);
    UInt8 *pixelBlur = malloc(CFDataGetLength(mutableDataRef));
    CFDataGetBytes(mutableDataRef,
                   CFRangeMake(0, CFDataGetLength(mutableDataRef)),
                   pixelBlur);

    const size_t imageWidth  = CGImageGetWidth(inputImage);
    const size_t imageHeight = CGImageGetHeight(inputImage);

    CGContextRef context = CGBitmapContextCreate(pixelBlur,
                                                 imageWidth,
                                                 imageHeight,
                                                 CGImageGetBitsPerComponent(inputImage),
                                                 CGImageGetBytesPerRow(inputImage),
                                                 CGImageGetColorSpace(inputImage),
                                                 CGImageGetBitmapInfo(inputImage));

    // Runs the stack blur in place on the pixel buffer; implemented
    // elsewhere in this category (parameter labels assumed here).
    [self.class applyStackBlurToBuffer:pixelBlur
                                 width:imageWidth
                                height:imageHeight
                                radius:radius];

    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGContextRelease(context);
    CFRelease(mutableDataRef);
    CFRelease(dataRef);
    free(pixelBlur);

    return outputImage;
}

- (UIImage *)boxBlurredImageWithRadius:(CGFloat)radius
{
    if ((radius < 0.0f) || (radius > 1.0f)) {
        radius = 0.5f;
    }

    // vImage needs an odd kernel size; map the 0-1 radius onto one.
    int boxSize = (int)(radius * 100);
    boxSize -= (boxSize % 2) + 1;

    CGImageRef rawImage = self.CGImage;

    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;
    void *pixelBuffer;

    CGDataProviderRef inProvider = CGImageGetDataProvider(rawImage);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    inBuffer.width = CGImageGetWidth(rawImage);
    inBuffer.height = CGImageGetHeight(rawImage);
    inBuffer.rowBytes = CGImageGetBytesPerRow(rawImage);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    pixelBuffer = malloc(CGImageGetBytesPerRow(rawImage) * CGImageGetHeight(rawImage));
    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(rawImage);
    outBuffer.height = CGImageGetHeight(rawImage);
    outBuffer.rowBytes = CGImageGetBytesPerRow(rawImage);

    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
    if (error) {
        NSLog(@"Error from convolution %ld", error);
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaNoneSkipLast);

    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    CFRelease(inBitmapData);
    free(pixelBuffer);

    return returnImage;
}



I ended up using GCD groups and notifications: the low-level box blur via Accelerate runs first (~80ms), while the stack blur does its work on an asynchronous background thread (~100-200ms, depending on pixel output):

// ViewWithBlurredImage.m

- (UIImage *)image
{
    if (!_image) {
        __block UIImage *blurredImage, *image;
        image = [UIImage imageNamed:self.imageName];
        // 50.0f is outside the 0-1 range, so the box blur falls back to 0.5f.
        blurredImage = [image boxBlurredImageWithRadius:50.0f];
        _image = image;

        // Show the fast box blur right away...
        [self addFadeAnimationForLayer:self.imageView.layer];
        self.imageView.image = image;

        [self addFadeAnimationForLayer:self.blurredImageView.layer];
        self.blurredImageView.image = blurredImage;

        // ...then compute the higher-quality stack blur in the background.
        dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
        dispatch_group_t group = dispatch_group_create();
        dispatch_group_async(group, queue, ^{
            blurredImage = [_image stackBlurredImageWithRadius:50.0f];
        });

        // Swap the stack-blurred result in on the main queue when it's ready.
        dispatch_group_notify(group, dispatch_get_main_queue(), ^{
            [self addFadeAnimationForLayer:self.imageView.layer];
            self.imageView.image = image;

            [self addFadeAnimationForLayer:self.blurredImageView.layer];
            self.blurredImageView.image = blurredImage;
        });
    }
    return _image;
}



The next step was to reveal the blur when the scroll view is dragged past its bounds:

// ViewController.m

- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    CGFloat topOffset = scrollView.contentOffset.y;
    CGFloat imageAlpha = 1.0f;
    CGRect imageViewRect = self.view.imageView.frame;
    UIImage *blurredImage = self.view.blurredImageView.image;

    if (topOffset < 0.0f) {
        // Fade the sharp image out as the user pulls past the top edge,
        // revealing the blurred image underneath.
        imageAlpha = 1.0f - (fabsf(topOffset / imageViewRect.size.height) * 6.0f);
        if (!blurredImage) {
            self.view.blurredImageView.hidden = YES;
        }
    } else if (self.view.blurredImageView.hidden && topOffset == 0.0f && blurredImage) {
        self.view.blurredImageView.hidden = NO;
    }

    self.view.imageView.frame = imageViewRect;
    if (self.view.blurredImageView.image) {
        self.view.blurredImageView.frame = imageViewRect;
        self.view.imageView.alpha = imageAlpha;
    }
}


I'll add some more cool pointers later this week...