A Blog About Anime, Code, and Pr0 H4x

If You Want to be Found, Stand Where the Seeker Seeks

March 4, 2012 at 12:00 PM

It's been quite the eventful few months since last I made a "real" blog post. Except, most of those events have just been me being hard at work making apps, writing some sweet h4x, marketing apps, watching some anime, improving apps, yelling at apps, and playing video games.

Studio Bebop has started to really take off, and is seeing more and more success each month. These last few months have been a major learning experience for me, not just in programming, but in business in general. One lesson that's been repeatedly bashed into my head ever since I got home is that when you are a self-employed app developer, 90% of your job is programming, and 100% of it is marketing.

The app market has EXPLODED these last two years, and since I decided that my old methods of product advertising (READ: using all the spambots I wrote as a freelancer) aren't exactly the most effective/ethical options anymore, I've had to start from square one! Well, it's been a long (and sometimes expensive) process, but I feel like I've finally started to get a grip on how to market apps semi-reasonably well. I'm planning on writing an article about this very subject in the near future, so I'll abstain from going into detail here. However, I will say that one really good way to get your app to stand out is to put together a promo video. I knew that all of that time spent making AMVs would one day come in handy!

On the pr0 h4x side of things, I completely rewrote my heuristic brute force password cracker Gentle-Brute. For those of you wondering what the heck "heuristic brute force password cracking" is, it's essentially a method of brute force cracking wherein the algorithm skips attempts for potential passwords like 'aaaaaa', 'asdf7777772dn', etc., which in turn leads to a smaller number of potential password permutations to try before finding the correct one, thereby (potentially) drastically reducing the amount of time necessary to crack a password using brute force. This is accomplished by only generating potential passwords that adhere to the rules of English-like words and phrases. I just recently found out that 2600: The Hacker Quarterly is going to publish the article I wrote on this very subject, so once that happens, I'll link to a proper explanation of HBF. (It's super cool, I promise.)
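
Gentle-Brute itself is a Ruby gem, and its real rules are quite a bit smarter than this, but just to make the idea concrete, here's a rough C-flavored sketch of the kind of filter a heuristic generator might run each candidate through before bothering to try it:

#include <stdbool.h>
#include <string.h>

// Illustration only -- NOT Gentle-Brute's actual rules. Assumes a lowercase candidate.
// Reject candidates that don't look anything like English words or phrases.
static bool looks_english_like(const char *candidate)
{
    int repeat_run = 1;     // run of identical characters
    int consonant_run = 0;  // run of characters that aren't vowels

    for (size_t i = 0; candidate[i] != '\0'; i++) {
        char c = candidate[i];

        // Three or more of the same character in a row ('aaa...') gets skipped
        if (i > 0 && c == candidate[i - 1]) {
            if (++repeat_run >= 3) return false;
        } else {
            repeat_run = 1;
        }

        // Long non-vowel clusters ('sdf777...') get skipped too
        if (strchr("aeiouy", c) == NULL) {
            if (++consonant_run >= 5) return false;
        } else {
            consonant_run = 0;
        }
    }
    return true;
}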

Rewriting Gentle-Brute was a lot of fun. It gave me a chance to try my hand at a few things I'd never done before. Like creating my very own super awesome Rubygem. Or using Curses to fulfill a secret lifelong dream of making my own super-hacker looking hash cracker. It's beautiful ;_;

Moving on in the world of h4x, I recently put together a neat little article discussing how to use intrinsic functions on iOS devices with Cortex A8 (and later) CPUs to perform super fast BGRA to Grayscale conversions on CVPixelBuffer frame data. Or as I like to put it, how to use ASM black magic to process eight pixels at a time of a video frame. You should read it, I think it's fascinating.

My brother and I went to Ikkicon over New Year's weekend. The con itself was pretty mid-tier, but I got a lot of great ideas for ways to market apps, and I had a great time with my brother and some other friends in Austin. I had to make some adjustments since I last wore it, but I busted out my Emillio cosplay both nights for some good old fashioned self-esteem building. After a bit of coaxing, my brother decided he'd cosplay Castiel from Supernatural, and he looked marvelous.

While we were there, I made the acquisition of some super awesome (and tax-deductible because they are for market research) figures, and a moe moe keychain. Gaze upon their glory!

Finally, to wrap things up, here are the highlights of the anime I've been watching and my thoughts on it.

I finished watching School Rumble, it was magnificent. I had mentioned earlier that the dub was pretty good (which it is), but following the admonition of my friend Preston, I switched to the sub and it was glorious! I love School Rumble, and you should too.

I watched Puella Magi Madoka Magica (or just Madoka if you're not super lame), and it was awesome. So awesome in fact, that I had this 16x20 poster made to adorn my wall with its greatness. The show is basically Cardcaptor Sakura, but with a darker, more adult-focused plot full of twists, a la Evangelion. (Maybe that's why there's going to be a Rebuild movie?)

The music is done by one of my favorite anime musicians, Yuki Kajiura. (Noir, Tsubasa Chronicle, .hack//SIGN) The animation was done by SHAFT, and it's not only gorgeous, but also incredibly stylized. Watch Madoka now.

Speaking of super stylized and beautiful animation, I recently finished watching Katanagatari. I really really really liked it. It's one of the most unique anime I've ever watched, both in terms of visual presentation and story. What's even better is that while Katanagatari may be super unique, it's not a case of all flash and no substance. (Unlike, say, Dead Leaves.) A word of warning: do not watch the dub, or I will find you and hurt you.

Also, CHEERIO!

^^^ Kinda sorta spoilers FYI ^^^

iOS - How to convert BGRA video streams to Grayscale SUPER fast.

February 26, 2012 at 12:00 PM

==========

For a practical example of this algorithm, you can check out my app See It - Video Magnifier.

==========

I thought I would share a little tidbit of iOS development trickery that I finally got worked out today.

I'm working on an app at the moment that takes in a live video feed from a device's back-facing camera and does some filtering to each frame before displaying it on the device's screen for the user. One of the filtering options my app offers is converting the video feed from color to grayscale.

Before I go any further, I'm going to assume that you are familiar with AVCaptureSession, and with setting up everything needed to access a device camera and display a live preview feed. If not, take a look at Apple's RosyWriter example, and the AVCamDemo example from the WWDC 2010 example code.
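
For reference, the bare-bones setup I'm assuming looks something like this (a simplified sketch with error handling omitted, in a class that adopts AVCaptureVideoDataOutputSampleBufferDelegate; setupCaptureSession is just a placeholder name, and the important part is requesting kCVPixelFormatType_32BGRA so the frames match the code below):

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Minimal capture setup sketch -- see RosyWriter/AVCamDemo for the full treatment
- (void)setupCaptureSession
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // The default video device is the back-facing camera
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [session addInput:input];

    // Ask for BGRA frames, so each pixel is a predictable 4-byte B,G,R,A chunk
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = [NSDictionary dictionaryWithObject:
                                [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                        forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    // Frames are delivered on the main queue here for simplicity;
    // a dedicated serial queue is a better idea in a real app
    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    [session addOutput:output];

    [session startRunning];
}

// Every preview frame lands here; grab its pixel buffer and hand it to the filter
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self processPixelBuffer:pixelBuffer];
}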

The method I was using at first to accomplish this was to iterate through each pixel in a frame, calculate that pixel's weighted average of its red, green, and blue values, and then write that weighted average back to each of the pixel's color channels.

Like so,

- (void)processPixelBuffer: (CVImageBufferRef)pixelBuffer
{
    const int BYTES_PER_PIXEL = 4;

    // Lock the pixel buffer so we can safely read and write its memory
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    unsigned char *pixels = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    for (int i = 0; i < (bufferWidth * bufferHeight); i++) {
        // Calculate the combined grayscale weight of the BGR channels
        int weight = (pixels[0] * 0.11) + (pixels[1] * 0.59) + (pixels[2] * 0.3);

        // Apply the grayscale weight to each of the color channels
        pixels[0] = weight; // Blue
        pixels[1] = weight; // Green
        pixels[2] = weight; // Red
        pixels += BYTES_PER_PIXEL;
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

The above snippet of code will take a BGRA format CVImageBufferRef and convert it to grayscale. Unfortunately for us, it's not very fast. Using the AVCaptureSessionPresetMedium video input setting, I was getting ~7fps on my 4th generation iPod Touch. (Which isn't necessarily bad, but could be better.)

So whilst Googling about for a way to speed up my BGRA to grayscale conversion, I came across two articles discussing RGB to grayscale conversion using ARM NEON intrinsic functions.

"ARM NEON intrinsic functions" is not only a mouthful to say, but also some serious low-level coding black magic available on devices with ARM Cortex A8 (and later) CPUs that lets you perform multiple computations at once. I couldn't explain the theory of it all to you to save my life, so I won't even bother trying. Suffice it to say that using intrinsic functions will allow us to process eight pixels at a time, as opposed to just one.

Those interested in learning more about the nitty-gritty of ARM NEON should take a look at "Introduction to NEON on iPhone" over on Wandering Coder.

So this article explains how to perform a BGRA to grayscale conversion on an AVCaptureSession video feed, but it doesn't do it very well. Allow me to help fill in the gaps.

First, prepare your project to use NEON intrinsics by adding '-mfpu=neon' to your project's "Other C Flags", and setting your project's compiler to "LLVM GCC 4.2" (which you should be using already, since all the cool kids use Automatic Reference Counting these days).

Next, make sure you add this line to your video processing class' header file, otherwise you're going to get all sorts of frustrating compile errors.

#import "arm_neon.h"
          

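One quick aside: arm_neon.h only exists when you're building for an actual ARM device, so if your target also builds for the Simulator (x86), you'll probably want to wrap the import in a guard along these lines:

#if defined(__ARM_NEON__)
#import "arm_neon.h"
#endif
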
Finally, implement the intrinsic BGRA to grayscale method outlined in this article. (Shown below.)

void neon_convert (uint8_t * __restrict dest, uint8_t * __restrict src, int numPixels)
{
    int i;
    // Grayscale weights, scaled by 256 (77/256 ~ 0.30, 151/256 ~ 0.59, 28/256 ~ 0.11)
    uint8x8_t rfac = vdup_n_u8 (77);
    uint8x8_t gfac = vdup_n_u8 (151);
    uint8x8_t bfac = vdup_n_u8 (28);
    int n = numPixels / 8;

    // Convert eight pixels per iteration
    for (i = 0; i < n; ++i)
    {
        uint16x8_t  temp;
        uint8x8x4_t rgb = vld4_u8 (src);  // de-interleave 8 BGRA pixels: val[0]=B, val[1]=G, val[2]=R, val[3]=A
        uint8x8_t   result;

        temp = vmull_u8 (rgb.val[0],       bfac);
        temp = vmlal_u8 (temp, rgb.val[1], gfac);
        temp = vmlal_u8 (temp, rgb.val[2], rfac);

        result = vshrn_n_u16 (temp, 8);   // divide by 256 and narrow back to 8 bits
        vst1_u8 (dest, result);
        src  += 8*4;
        dest += 8;
    }
}

This last step, however, is the trickiest of all! Because unfortunately that article doesn't really tell you how to use this function, and it also leaves out one very important tidbit of information you're going to need: namely, what the BGRA to grayscale method actually produces.

You see, you pass the method a CVPixelBuffer of image data (an array of pixels, wherein each pixel has four values representing levels of blue, green, red, and alpha), along with an empty memory buffer. What the method then does is fill that buffer with the grayscale value for each pixel in the CVPixelBuffer, which by itself is totally useless.

So in order to actually make your video feed appear grayscale, you have to not only run each preview frame through your intrinsic method, but then apply the grayscale values it creates back to each pixel in your preview frame.

So enough talk, here's all the code you're going to need to make it all happen!

void neon_convert (uint8_t * __restrict dest, uint8_t * __restrict src, int numPixels)
{
    int i;
    // Grayscale weights, scaled by 256 (77/256 ~ 0.30, 151/256 ~ 0.59, 28/256 ~ 0.11)
    uint8x8_t rfac = vdup_n_u8 (77);
    uint8x8_t gfac = vdup_n_u8 (151);
    uint8x8_t bfac = vdup_n_u8 (28);
    int n = numPixels / 8;

    // Convert eight pixels per iteration
    for (i = 0; i < n; ++i)
    {
        uint16x8_t  temp;
        uint8x8x4_t rgb = vld4_u8 (src);  // de-interleave 8 BGRA pixels: val[0]=B, val[1]=G, val[2]=R, val[3]=A
        uint8x8_t   result;

        temp = vmull_u8 (rgb.val[0],       bfac);
        temp = vmlal_u8 (temp, rgb.val[1], gfac);
        temp = vmlal_u8 (temp, rgb.val[2], rfac);

        result = vshrn_n_u16 (temp, 8);   // divide by 256 and narrow back to 8 bits
        vst1_u8 (dest, result);
        src  += 8*4;
        dest += 8;
    }
}

#define BYTES_PER_PIXEL 4

// Method that processes a CVPixelBuffer representation of a preview frame
- (void)processPixelBuffer: (CVImageBufferRef)pixelBuffer
{
    // Lock the pixel buffer into place in memory
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Get the dimensions of the preview frame
    int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);

    // Turn the CVPixelBuffer into something the intrinsic function can process
    uint8_t *pixel = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Allocate some memory for the grayscale values that the intrinsic function will create
    uint8_t *baseAddressGray = (uint8_t *) malloc(bufferWidth * bufferHeight);

    // Convert BGRA values to grayscale values
    neon_convert(baseAddressGray, pixel, bufferWidth * bufferHeight);

    // Iterate through each pixel in the preview frame, and write its precomputed
    // grayscale value back into the blue, green, and red channels
    for (int i = 0; i < (bufferWidth * bufferHeight); i++) {
        pixel[0] = baseAddressGray[i];
        pixel[1] = baseAddressGray[i];
        pixel[2] = baseAddressGray[i];
        pixel += BYTES_PER_PIXEL;
    }

    // Release the grayscale values buffer
    free(baseAddressGray);

    // Unlock the pixel buffer, we're done processing it
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

And that's it! Using this method, I'm able to get ~25fps on the same preview feed where I was getting ~7fps with my original method.

[ Kick it up a notch, with inline assembly! ]

While the performance boost you get from using intrinsic functions is pretty great, you can squeeze even more performance out of your app by using an inline ASM BGRA to grayscale conversion method, which is exactly what this article talks about doing. But again, the author doesn't explain how to actually use it (and there's a bug in his code that makes it unusable).

Luckily for you, I (along with a kind anon) have worked out the kinks, and everything works super smooth now.

All you need to do is add this method to your existing code,

static void neon_asm_convert(uint8_t * __restrict dest, uint8_t * __restrict src, int numPixels)
{
    __asm__ volatile("lsr %2, %2, #3 \n"          // we work on 8 pixels per iteration
                     "# build the three constants: \n"
                     "mov r4, #28 \n"             // Blue channel multiplier
                     "mov r5, #151 \n"            // Green channel multiplier
                     "mov r6, #77 \n"             // Red channel multiplier
                     "vdup.8 d4, r4 \n"
                     "vdup.8 d5, r5 \n"
                     "vdup.8 d6, r6 \n"
                     "0: \n"
                     "# load 8 pixels: \n"
                     "vld4.8 {d0-d3}, [%1]! \n"
                     "# do the weighted average: \n"
                     "vmull.u8 q7, d0, d4 \n"
                     "vmlal.u8 q7, d1, d5 \n"
                     "vmlal.u8 q7, d2, d6 \n"
                     "# shift and store: \n"
                     "vshrn.u16 d7, q7, #8 \n"    // Divide q7 by 256 and store the result in d7
                     "vst1.8 {d7}, [%0]! \n"
                     "subs %2, %2, #1 \n"         // Decrement iteration count
                     "bne 0b \n"                  // Repeat until iteration count hits zero
                     // dest, src, and numPixels all get modified by the asm, so mark them read-write
                     : "+r"(dest), "+r"(src), "+r"(numPixels)
                     :
                     // clobbered registers, plus "memory" since we write through dest
                     : "r4", "r5", "r6", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "d14", "d15", "memory", "cc"
                     );
}

And replace

neon_convert(baseAddressGray, pixel, bufferWidth*bufferHeight);
          

with

neon_asm_convert(baseAddressGray, pixel, bufferWidth*bufferHeight);
          

and you're done!

[ Here's some benchmarks ]

==========

For a practical example of this algorithm, you can check out my app See It - Video Magnifier.

==========

A Man Without Fear Cannot Be Wise

November 20, 2011 at 12:00 PM

I have lots of things to talk about, I've been busy this last month!

I got my new computer put together (specs here), and it is awesome. Newegg didn't process my order for some reason the first time around, so I had to wait an extra week and reorder. I decided to splurge and add another 8gb of RAM (for just $50!), so now I'm rolling with 16gb of RAM on this machine, and it feels just great. I'm still totally blown away at how far tech has come, and how much prices have dropped, since I built my last PC in 2005. I remember the ballin' ASUS motherboard I put in my first PC supported a maximum of 4gb of RAM, and now my new PC does that in just one stick! I love it!

Needless to say, my new command center is way legit. I've uploaded some pictures of it to my Photobucket, check it out!

Since I have such a manly computer right now, I figured I might as well capitalize on that, and play some games with all their bells and whistles cranked up to maximum. Unfortunately for me, I've forgotten what my Steam password is. What's worse, I don't know what the answer is to my secret question! Oh well. So I created a new Steam account, and now I only have one game.

At the behest of a few of my friends, I decided to purchase a copy of Batman: Arkham Asylum, and then proceeded to play the crap out of that game. Arkham Asylum is a fantastic game, and I loved every second of it. There's just something magical about lurking in the shadows, picking off the Joker's henchmen one by one, and watching in glee as they start to freak out as their numbers dwindle. Definitely good times.

Once I have some more disposable income, I plan on buying a copy of Skyrim, as well as a copy of Amnesia: The Dark Descent. I'm looking forward to playing Amnesia; apparently it is supposed to make me crap my pants in absolute terror, which definitely sounds like my idea of a good time on a Thursday afternoon.

Also, at some point I need to pick up a Wii Motion Plus Wiimote and a copy of Skyward Sword. According to IGN, it is the greatest Zelda game of all time, which is definitely a big claim, but I hope it's true. I love Ocarina of Time, and if this game surpasses it, it's a total win-win for me. Also, one-to-one motion-controlled sword fighting sounds just wonderful.

On the anime front, I've seen/am watching a lot of new things, and in the spirit of brevity, I'll just give you the highlights a la bullet point.

  • I rewatched Gunbuster for the second time, and it was still pretty good. I really wish it was longer than just six episodes though. Everything just feels really rushed and unexplained due in large part to the time constraint. It's not enough to ruin the show for me, but I'd really love to know just what exactly Kazumi Amano does inside the Gunbuster. It seems to me that Noriko is the one who actually handles all the laser beams, and cool poses. Whatever though, it's still super rad, and you should watch it.
  • I've started watching School Rumble, and it is awesome! I seriously can't stop smiling while I'm watching this show. (The dub isn't bad either.)
  • I finally got to finish watching Phantom ~Requiem for the Phantom~, and I really enjoyed it. I got my hands on a dual-audio DVD rip of the series, and was pleasantly surprised at how well Funimation did on the dub. There was the odd voice here and there that made me grimace, but it was never enough for me to switch over to the sub. As is typical with anything BEETRAIN does, there wasn't any blood when people got shot, but that didn't have a major negative impact on my experience either. (I'd rather see at least some blood though.) I'm still on the fence on whether or not BEETRAIN does the whole no-blood thing due to some sort of censorship concern, or if they're just lazy and don't feel like drawing it. Phantom was a good watch, and definitely something you should look in to if you like shows with assassins or mafia themes.
  • I watched the Fate/Stay Night Unlimited Blade Works movie (in glorious 1080p), and loved every single second of it. Visually it was quite captivating, and they did a fantastic job capturing just how powerful the servants are. The movie was definitely geared toward people who have played the VN though. They skip big fat chunks of time/plot over the course of the movie, which isn't a problem if you've played the VN, but would probably make things very confusing and frustrating for someone who hasn't. It's definitely worth watching either way.
  • I've also watched a bunch of Lupin III specials as well, and they were all legit in their own special ways. The one in particular though that I really liked was Episode 0 'First Contact' which was an origin story for how the Lupin gang got together. Goemon cut a lightning bolt in half, it was great.

I've been doing lots of neat programming things too. The great majority of my time has been dedicated to revamping my existing iOS apps, and working on some new ones as well. I won't say anything about the super cool top secret new apps I'm working on right now, since they're still in a pre-release state, but suffice it to say they're going to rock your world.

A major accomplishment for Studio Bebop that I can talk about is what has recently happened with the iManga Reader series. Over the last two days, Apple has approved a major update to the free version, and finally approved the pro version for sale again!

I'm very excited about these releases, not only because they're my primary breadwinners right now, but also because I really am proud of the improvements and additions I've made. Along with a bunch of miscellaneous bug fixes and performance tweaks, I've revamped a lot of the UI code to be a lot more 'iOS like', and have added full support for sharing your favorite manga series via Facebook or Twitter. (The recently added native Twitter API for iOS 5 is gloriously simple by the way, and I'm in love with it.)
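
To give you an idea of just how simple, a tweet sheet boils down to roughly this (just a sketch of the framework calls, not the actual iManga Reader code; shareSeriesNamed: is a made-up example method):

#import <Twitter/Twitter.h>

// Rough sketch of the iOS 5 tweet sheet -- not the actual iManga Reader code
- (void)shareSeriesNamed:(NSString *)seriesTitle
{
    if (![TWTweetComposeViewController canSendTweet]) {
        return; // No Twitter account configured on this device
    }

    TWTweetComposeViewController *tweetSheet = [[TWTweetComposeViewController alloc] init];
    [tweetSheet setInitialText:[NSString stringWithFormat:@"Reading %@ on iManga Reader!", seriesTitle]];

    [tweetSheet setCompletionHandler:^(TWTweetComposeViewControllerResult result) {
        [self dismissModalViewControllerAnimated:YES];
    }];

    [self presentModalViewController:tweetSheet animated:YES];
}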

The feature I'm most proud of though, is the addition of automatic reading statistic tracking via MyAnimeList. For those of you who don't know, MyAnimeList is a website that allows people to keep a detailed log of all the anime and manga that they have watched or read, as well as the stuff they're watching or reading at the moment. It's kinda like if Facebook swallowed Wikipedia and crapped out a database-driven spreadsheet. It's surprisingly fun once you get into it; you can see my profile here.

So in the latest release of the iManga Reader, a user can now just enter their MyAnimeList username and password in the app settings, and the app will automatically track what they're reading and update their MyAnimeList for them every time they start a new series, or finish reading a chapter!

Glorious iManga updates aside, I've also added a new project to the code page. In short, it's a Python application that allows you to automate creating user accounts for any phpBB forum. It also has full support for CAPTCHA cracking via Decaptcher.com.

End Transmission.

The Green Grass Blows in the Wind, Dancing

October 27, 2011 at 12:00 PM

It's been a good day.

I started working on updating/revamping the iManga Reader using the new Xcode 4 and iOS 5 SDK, and had a great time doing it. There's a bit of a learning curve going from Xcode 3 to Xcode 4, but so far I'm very impressed. It has a very unique UI for an IDE; it looks and feels a lot like iTunes and Komodo Edit had a baby. Also, it's definitely geared a lot more towards iOS development, and has in-app Git tracking support. I'm pretty smitten, to be honest with you.

I came across a very useful tool for OSX today. It's called SpeedLimit, and it lets you simulate different network speeds on your machine for a given port and/or host name. Which is great if you're trying to get a feel for how your iOS app will function when a user is on a slow network connection. Check it out.

While we're on the subject of super useful resources to aid in iOS development, I also found a great website called Glyphish that has lots of free images you can use for all of your UITabBarItem icon needs.

Lastly, as I mentioned earlier, almost all of my old computer hardware is toast and I need a new rig. Well I ordered the parts for a new one today, and it should all be here this weekend! For anyone interested, here are the specs:

I'll post pictures of the new command center once everything gets here and built.

High Hopes in Velvet Ropes

October 26, 2011 at 12:00 PM

Me and my steed

Howdy folks, I'm back.

It's been a long time since I last posted on Teh-1337 (just a bit over two years to be exact), but I'm back and making blog posts once again! For those unaware, I've spent the last two years serving as a missionary for The Church of Jesus Christ of Latter-day Saints (the Mormons) in the Colorado Denver South Mission. It's been a wild and crazy two years full of adventures and awesome stories. I'm still sorting through all of the pictures and videos I took (two years is a long time after all), but I'm planning on doing a multi-part writeup on the adventures of my mission that should be pretty awesome.

Some of you may or may not have noticed, but things look just a little bit different on Teh-1337 than they did before. That's because I cooked up an awesome new blog engine! (The source code of which you can check out over on my Github.) The new engine is pretty spiffy, and I'm pretty proud of it. It's all still in Python; I was planning on doing it in Ruby with some awesome Sinatra h4x, but that wasn't working out right on my host. One of the spiffy new features of this blog engine is that it actually uses databases now, talk about sophisticated amirite?

So many things have changed over the last two years, it's awesome! So awesome in fact, that I feel the need to make a list.

Honestly there isn't too much else for me to talk about right now. Things have been pretty quiet since I got back. I've been spending my time getting back into living a normal life, figuring out what hardware I can salvage from my old rigs, slowly getting back into the swing of things with Studio Bebop, visiting with family, and catching up on all the anime, TV shows, and movies I've missed these last two years.

More legit blog posts will come soon, I promise!

What Does Justice Feel Like?

November 6, 2008 at 12:00 PM

I am the bone of my blog.

Pythons are my body, and h4x is my blood.

I have created over a thousand posts.

Unknown to sleep. Nor known to grammar.

Have withstood fatigue to create many bots.

Yet, those scripts will never be complete.

So as I pray, "Unlimited Blog Works."
