Why is FileManager so unforgiving?

This is more of a reminder to myself on how to do simple things.

Apple keeps modifying the FileManager API, but doesn’t actually make it obvious how to do simple things.  I have Data that I want to save somewhere.  You’d think it would be as easy as getting a path, then writing to it.  Nope.

So here’s a recipe to show how to simply write some data somewhere, overwriting as you do it:

func testSerialization() {
    do {
        let data = try NSKeyedArchiver.archivedData(withRootObject: self.dates, requiringSecureCoding: false)
        let url = writeLocation()  // currently just the caches directory / SomeFolder / SomeFilename.dta
        let fm = FileManager.default
        if fm.fileExists(atPath: url.path) {
            try fm.removeItem(at: url)
        }
        let folder = url.deletingLastPathComponent()
        try fm.createDirectory(at: folder, withIntermediateDirectories: true, attributes: nil)
        let success = fm.createFile(atPath: url.path, contents: data, attributes: nil)

        XCTAssertTrue(success, "Should have written the file!")
        XCTAssertTrue(fm.fileExists(atPath: url.path), "Should have written something here")
    } catch let error {
        XCTFail("Failed with error: \(error.localizedDescription)")
    }
}
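
Worth noting: Data.write(to:options:) with the .atomic option overwrites an existing file on its own, so the exists/remove dance can be skipped entirely. A minimal sketch of that variant, reusing the writeLocation() helper from the test above:

func write(_ data: Data) throws {
    let url = writeLocation()
    // Make sure the parent folder exists. With withIntermediateDirectories: true,
    // this does not throw if the folder is already there.
    try FileManager.default.createDirectory(at: url.deletingLastPathComponent(),
                                            withIntermediateDirectories: true,
                                            attributes: nil)
    // .atomic writes to a temporary file first, then swaps it in,
    // replacing any existing file at that URL.
    try data.write(to: url, options: .atomic)
}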


Level Up: Test-Driven Development

Until very recently, I was used to being that hired gun who parachutes in, kills all the work tickets, asks the important questions about the product, makes an app work as desired on a tight deadline to a reasonable level of quality given the constraints, then is out again, all for a reasonably good sum of money. It’s exhausting, because these companies often have no established procedures.  So in addition to being Lone Wolf: Kicker of Asses, I’m training junior developers so they can assist me without slowing me down too much; I’m creating an Ad-hoc QA Department (what is severity, what is reproducibility, how to write bug reports, what an end-user test suite is so you know how to regression test, and why you should use a ticketing system instead of just popping by my desk to effectively pull the plug on whatever 50 things I had in my head); I’m having to interpret incomplete designs, pull the assets out of the design tools (Zeplin, Sketch, sometimes Photoshop) because many designers don’t know exactly what you need anyway, poke holes in and/or fill the gaps of the UX, and of course manage upwards.

Oh yeah, and also do “Agile Waterfall” development, born of companies that only really do Waterfall but want to be “hip” to the new trends and demand we do Agile (with no scrum master, or really anyone who knows how to lead that effectively). So then your time is further taken up with meetings and with pushing around work tickets that don’t really encapsulate the work you actually need to do; but managers need that so they can generate reports with data meant to impress other people who have no idea what’s going on anyway, when increased trust in your hires and a simple “we’re good” would be equally effective/ineffective. (Ah, perhaps they don’t know how to make the right hires, or couldn’t get them if they did.)

In all of that, I have to “get ’er done”, because the deadline doesn’t change and, surprise! All of your dependencies (Design, Backend API) have the same deadline.

Yikes. A day in the life.  The above would be the worst day in the life; it’s rarely all of those things at once.

So I’m grateful for my current freelance contract. It’s the first contract in years where I’ve felt I’m working at a proper software development company. The management overhead seems low, yet the organization doesn’t suffer for it. They have a process here that works. (I think I just hit the jackpot: they placed a priority on the members of the team, who are personable yet very competent. Ultimately it’s a team that cares about what they do and about keeping a nice team vibe. It also helps that they have corporate backing, and therefore, it would seem, a generous timeframe and budget.)

“Take the time you need to do a good job.” This is very much the culture here. For one of the first times in my career, I’ve been exposed to an office environment where you’re given time to think, and time to write a lot of tests while you develop. You can ask for specifications, and those exist and are fixed. There are two other iOS devs here to bounce ideas off of, and of course to do code reviews with. It is so satisfying when you get to refactor your original approach into ever more concise code that is more robust and less error-prone. Time where you can write the API docs and the unit tests to basically freeze the specification. Normally there just isn’t enough time, given all the other tasks that Lone Wolf has, AND the product design always seems to be a moving target.

In short, it feels like I finally have the time and space to level up. Unit Tests are especially awesome when you work in teams and I’m glad for the opportunity to work in this environment for a while so I can establish some good habits and really reach a new plateau in my journey as a software developer.

Today was the first day that Test Driven Development actually justified its existence

I’ve been making apps since iOS 2.2.  People tell me I’m good at it.  Meh.  There’s always a bigger fish.  But I love what I do, so chances are I’m not horrible at it.

Today was the first day where I used Test Driven Development to actually develop code.  Don’t get me wrong; it’s not like I don’t write unit tests.  I do.  But what I’m referring to here is being given the starting and passing conditions of a test before any code is written at all.  In my field of work, this never happens.  The design is ALWAYS a moving target.  Nothing in the startup world is ever known in advance, so although you could write unit tests, it doesn’t always make sense.

I’ve recently been working on the implementation of the rules of Canadian Ice Hockey.  (Don’t ask.)  The sport itself seems pretty straightforward.  Put the puck in the net.  Goal.  Increase Score.  No way!  There are a lot of complicated rules surrounding penalties, but thankfully there is a referee’s handbook that goes over all the complicated scenarios and tells you what the result should be.

Perfect for TDD.  I literally wrote all the unit tests before I wrote the code that would produce the expected results.  I love it because, I have to be honest, the solver code I wrote just “feels bad”.  I’m not even certain how parts of it work, and I only wrote it this past week.

What unit tests tell me is: It doesn’t matter!  As long as the tests pass, the code does what it’s supposed to do.  Very satisfying.
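
To give a flavour of what that looks like (with made-up names; the real rule set is far bigger), the tests come straight out of the handbook first, and the solver gets written afterwards until they pass:

import XCTest

// Toy stand-in for the real solver, just to show the shape of the tests.
struct PenaltySituation {
    private(set) var skatersOnIce = 5

    mutating func assessMinor() {
        skatersOnIce -= 1
    }

    mutating func opponentScored() {
        // Handbook rule: a minor penalty ends early when the opposing team scores.
        skatersOnIce = min(skatersOnIce + 1, 5)
    }
}

final class PenaltySituationTests: XCTestCase {
    // Written from the handbook scenario before any solver code existed.
    func testMinorPenaltyEndsOnPowerPlayGoal() {
        var situation = PenaltySituation()
        situation.assessMinor()
        XCTAssertEqual(situation.skatersOnIce, 4, "A minor puts the team a man down")
        situation.opponentScored()
        XCTAssertEqual(situation.skatersOnIce, 5, "A power-play goal ends the minor early")
    }
}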


On the Separation of QA from Development

I don’t know, maybe it’s just me, but it’s very important for me to be able to be “in the zone” when I’m programming.  If I’m in the zone, I feel superhuman, able to accomplish the work of two or more people at once.  Being in the zone, in computer-techie parlance, is like having a large store of volatile RAM completely loaded with all the information you need to solve the problem.  Any time someone turns up to distract you, it’s like they’ve just pulled the plug on your computer.  Now you have to boot back up, load all the data, re-orient yourself with the task, then continue working.  Until the next distraction.

I notice I get extremely hostile when this keeps happening over the course of a day.  I request home office time a lot, not because I don’t like my co-workers or my work environment (quite the opposite, actually), but because I love being in the zone and know that being there is also good from a business perspective.

We’re now going through a small QA cycle, so of course there will be bug reports.  What I’ve come to realise is that it’s important to keep testers away from developers, because testers will find a bug and feel they need to tell you about it right away.  As mentioned before, they pull the plug each time they do so, and then probably don’t understand why I slowly want to rip their head off.  “Dude, what’s your problem?”…

Well, here is my problem.  I just explained it.  Use Trello.  Put it on a card list.  Let me get to it asynchronously.  It’s better for everyone that way.  If I need clarification, I will come find you.  Not in any elitist sense, but purely business: my time is worth more than yours, so it’s best to prioritise my efficiency over yours.

Unit Testing Block-based APIs

UPDATE:  I’ve changed my answer.  The Original Post (marked as such below) uses the approach known as a “spinlock”, which is undesirable.  Have a look at this post on Grand Central Dispatch over at www.raywenderlich.com, under the section “Semaphores”.

The updated solution, which achieves the same result only better, uses a new XCTestCase base class for your test cases:

#import <XCTest/XCTest.h>

@interface HSAsynchronousTestCase : XCTestCase
{
    dispatch_semaphore_t _waitSemaphore;
}
- (void)HS_beginAsyncTest;
- (void)HS_completeAsyncTest;
- (void)HS_waitToComplete:(NSTimeInterval)timeoutDuration;

@end

@implementation HSAsynchronousTestCase

// Call this before kicking off the asynchronous work.
- (void)HS_beginAsyncTest
{
    _waitSemaphore = dispatch_semaphore_create(0);
}

// Call this from the completion block(s) of the asynchronous work.
- (void)HS_completeAsyncTest
{
    if (_waitSemaphore) {
        dispatch_semaphore_signal(_waitSemaphore);
        _waitSemaphore = nil;
    }
}

// Call this at the end of the test method; it blocks until HS_completeAsyncTest
// fires or the timeout elapses.
- (void)HS_waitToComplete:(NSTimeInterval)timeoutDuration
{
    dispatch_semaphore_t semaphore = _waitSemaphore;
    if (!semaphore) {
        return;  // the completion block already ran, e.g. synchronously
    }
    dispatch_time_t timeoutTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(timeoutDuration * NSEC_PER_SEC));
    if (dispatch_semaphore_wait(semaphore, timeoutTime)) {
        XCTFail(@"Test timed out");
    }
}

- (void)setUp
{
    [super setUp];
    [self HS_completeAsyncTest]; // in case something went wrong with the last one...
}

@end


Then you use it analogously to the code below: wherever you see _done = YES;, substitute [self HS_completeAsyncTest];, and wherever you see _done = NO;, substitute [self HS_beginAsyncTest];.

One more time: you call [self HS_beginAsyncTest]; before your asynchronous method, you call [self HS_completeAsyncTest]; in the completion block(s) of that method, and you call [self HS_waitToComplete: kSomeDurationInSeconds]; at the end of your test method.
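
For anyone doing this in Swift these days, the same semaphore pattern translates directly. A sketch, where fetchSomething is a made-up stand-in for whatever block-based API is under test:

import XCTest

// Made-up stand-in for the block-based API under test. Note the completion
// must fire on another queue; if it ran on the thread doing the waiting,
// the test would deadlock.
func fetchSomething(completion: @escaping (String?) -> Void) {
    DispatchQueue.global().async { completion("payload") }
}

final class AsyncAPITests: XCTestCase {

    func testBlockBasedAPI() {
        let semaphore = DispatchSemaphore(value: 0)  // "begin"

        fetchSomething { result in
            XCTAssertNotNil(result, "Should have gotten a result back")
            semaphore.signal()                       // "complete"
        }

        // "waitToComplete": fail the test if the completion never fires.
        if semaphore.wait(timeout: .now() + 10) == .timedOut {
            XCTFail("Test timed out")
        }
    }
}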

ORIGINAL POST for Reference

This is a code recipe that I’ve been using for a while and I’m sad to say I’m not even sure who I got it from. Obviously a wonderful person on stackoverflow.com.

I’m putting it here for my own purposes, as I tend to refer to my own posts at times. “How did I do that again…?”

The problem with any asynchronously executed code is that it finishes after the calling method does, so the test method returns before your test is truly finished. We need to stop that method from returning until the asynchronous work has delivered its result. This is how you do it.

Say you have a Unit Test Case subclass:

@implementation SomeModel_Tests
{
    BOOL _done;  // ivars are reachable inside blocks via self, so no __block qualifier is needed
}

/**
Then add this helper method
*/
- (BOOL)waitForCompletion:(NSTimeInterval)timeoutSecs
{
    NSDate *timeoutDate = [NSDate dateWithTimeIntervalSinceNow:timeoutSecs];
    NSLog(@"Wanting to wait on thread %@", [NSThread currentThread]);
    do
    {
        [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:timeoutDate];
        if ([timeoutDate timeIntervalSinceNow] < 0.0)
        {
            NSLog(@"Breaking out of waitForCompletion!");
            break;
        }
    }
    while (!_done);
    return _done;
}

// ...
@end

Now we have everything we need to test a block-based API. For example:

- (void)testSomeParsingOperation
{    
    NSURL *contentURL = [NSURL URLWithString:@"http://www.someurl.com/content.json"];
    NSURLRequest *request = [NSURLRequest requestWithURL:contentURL];
    
    JSONParsingRequestOperation *op;  // I just made this up.
    op = [JSONParsingRequestOperation JSONRequestWithRequest:request
                                             completionBlock:^(BOOL success, NSSet *parsedDataObjects, NSError *error)
    {
        XCTAssertTrue(error == nil, @"Parsing should have worked!");
        XCTAssertTrue(parsedDataObjects.count > 0, @"Because I know content.json should have objects in it");
        _done = YES;
    }];
    
    [[AFHTTPClient sharedClient] enqueueHTTPRequestOperation: op];
    
    [self waitForCompletion: 260];
}

That’s it. Have fun.

Updated some info about Mantle

I haven’t been blogging much recently. My bad. In the meantime I’ve been working on my own app, a Songbook App that allows MIDI control: if you want to change pages while your hands are busy playing instruments, you can use something like a footswitch controller to turn the page.

Check out my Portfolio page for that.

If you’re not a music person, here’s a quick post to tell you that I updated my post about the Mantle Framework. It discusses some findings about working with primitives in your data models, something that wasn’t quite clear to me at first. In short, Mantle is awesome and does all of that for you.

Recipe: a Podfile with different Pods per target

So, say you want specific pods for unit testing. You wouldn’t want to include those in your main target, now would you? Of course not! So here’s a quick recipe showing how to set that up in your Podfile.

This also assumes you checked the box ‘Include Unit Tests’ when you set up your Xcode project. Otherwise, see here.

If the name of your app target is MyGreatApp and your unit testing target is MyGreatAppTests:

platform :ios, "5.0"  # or whichever you need to support; your deployment target

pod "DCIntrospect"  # most awesome tool
pod "AFNetworking", "~>1.3.2"  # a standard lib for iOS apps
pod "SSZipArchive"  # zipping/unzipping, anyone?
pod "MD5Digest"  # don't let your filepaths be corrupted by newlines and spaces again
pod "MGImageUtilities"  # don't cache images for UIImageViews with fixed frames using a UIViewContentModeScaleAspectFi... resize and cache those instead

target :test, :exclusive => true do
    link_with 'MyGreatAppTests'  # i.e. the test target name in your project

    pod 'OCMock', '~> 2.1.1'  # OCMock. Great helper
    pod 'Expecta', '~> 0.2.1'  # nice syntax; you can write readable pass conditions
end