KF: Thoughts
Spring Papers

I wrote 4 papers this semester for various classes.

MIT: Junior Year

Here's what I've been up to Junior year at MIT.

Augmented Reality, In Real Life

Here's a short one. I found an example of augmented reality in real life. Take a look.

My App Featured on TV in Denmark

My app Is It Tuesday was featured on a talk show in Denmark. Then, they posted an article whose title translates to "Is it an app? Five ridiculously unimportant apps to hit the waiting death".

See the video and full story here.

Then & Now: How Little Things Have Changed

With the debut of the new 21” Retina iMacs a few months ago, Apple published “Then and Now,” a page dedicated to showing the progress of the iMac, from 1998 to the latest model. It shows the incredible advances of desktop computers. While some may see this as progress, I see stagnation.

Read the entire post here to see the full story.

The House That Marvin Built

Marvin Minsky died this week. Marvin started MIT’s AI Lab, which is now CSAIL (the center of computer science at MIT). He left the AI Lab to help start the Media Lab (the most interdisciplinary, antidisciplinary lab in the world).

Here’s a quick story about Minsky that we re-created on the chalkboard in Stata.

Marvin Minsky was my advisor’s advisor. Minsky is one of those professors who make me proud to be an MIT student.

Introducing, Visual History

At Harvard's first ever hackathon, HackHarvard 2015, Joel Gustafson and I created a new Chrome Extension. It's the first extension I've ever worked on, and I think it can be pretty useful. Here's the pitch:

Your browsing history is linear: it's displayed as a list. But you don't browse the web in a linear fashion. You hit the back button and open multiple links from the same page. Browsing the web is like a tree, not a list. Our extension, Visual History, shows your browsing history as a tree, which you can easily navigate from the keyboard.
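To make the tree idea concrete, here's a minimal sketch of the underlying data structure. The names and shape are my own illustration, not the extension's actual code: each page you open becomes a child of the page you came from, and pressing back just moves a cursor up a level instead of discarding anything.

```typescript
// Browsing history as a tree: pages opened from the current page become
// children, and "back" moves a cursor up one level.
interface PageNode {
  url: string;
  parent: PageNode | null;
  children: PageNode[]; // pages opened from this one
}

// Visiting a page adds a new branch under wherever the cursor currently is.
function visit(current: PageNode, url: string): PageNode {
  const node: PageNode = { url, parent: current, children: [] };
  current.children.push(node);
  return node;
}

// Going back never deletes anything: the branch you just left stays in the
// tree, which is exactly what a flat history list throws away.
function back(current: PageNode): PageNode {
  return current.parent ?? current;
}

// Example: open a link from an article, go back, then open a second link.
const root: PageNode = { url: "newtab", parent: null, children: [] };
let cursor = visit(root, "example.com/article");
cursor = visit(cursor, "example.com/link-1");
cursor = back(cursor); // back to the article
cursor = visit(cursor, "example.com/link-2");
console.log(cursor.parent?.children.map(n => n.url));
// ["example.com/link-1", "example.com/link-2"]: two branches from one page
```

However the real extension wires this up to the browser, the core is just this parent/children structure plus a cursor you can move from the keyboard.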

The extension is available now in the Chrome Web Store here. We also open-sourced the project, and the code is available on GitHub here. You can also see more screenshots, a video, and a longer description on the project page here.

Of course, this is just the beginning of a series of things we do on the computer that are currently linear but really shouldn't be. Stay tuned.

a quick thought on the launch of the ipad pro

For 5 years, the iPad has been called a consumption device. Perfect for Netflix, reading ebooks, and checking emails. And for 5 years, it’s been criticized as a bad device for productivity and creation.

There have always been “content creation” apps available for iPads: word processors, photo editors, spreadsheet apps, etc. But they were mostly ported versions of desktop apps. The apps have been called “touch optimized.” But that’s a lie. “Touch optimized” meant that UI elements were slightly larger and the more advanced functionality was removed.

Very few apps were truly great. Large multi-touch surfaces should have opened entirely new ways to create: new ways to collaborate, new ways to interact with media, new ways to program, and more. But that software never came.

Each year, the hardware improved, but the software stayed stagnant. Finally there is a device which is purely focused on content creation and productivity: the iPad Pro. It even works with a stylus.

Now, more than ever before, there is an opportunity to create entirely new types of apps. Like apps that work with multitouch gestures from both hands with the iPad on a table. Or apps that use the stylus not just for art, but as a way to manipulate data.

Except the software still isn’t there.

Every demonstration of the iPad shows the lowest hanging fruit: drawing apps, for drawing static pictures. Where are the far better ways to create, communicate, and collaborate on tablets? Where is the software that allows people to use tablets to get “serious” work done?

I guess it’s time to go make it...

summer 2015: in review

Here's a quick look back on my summer in California.

virtual monitors

The Setup

Humans are great at categorizing objects spatially. We have bookshelves to arrange physical books, dressers to arrange clothes, and cabinets for food and utensils. These tools make retrieving objects fast and effortless. You can probably picture exactly where your forks and knives are in your kitchen.

However, with computers we are limited by the four edges of our small screens. This makes storing information spatially a challenge. Sure, you can scroll around a large window to give the illusion that your computer is bigger — but it’s not the same. On a single screen, it’s hard to quickly reference detailed information while you are working. You would likely have to switch tabs or windows, or minimize the view at hand, to see complementary information. You can’t review a chart or glance at a stream of incoming information without interrupting your work.

The Current Solution

One solution to this problem is to have more than one screen. Multimonitor setups have become fairly common on desks and in the workplace — especially for programmers.

Organizations that need vast amounts of glance-able information take this to an extreme. NASA engineers and scientists, who have to quickly process lots of incoming information, use control rooms with dozens of monitors.

This solution has two challenges, though. First, every additional monitor is an additional expense. Most people cannot justify the expense of more than one or two monitors. Second, it’s completely immobile. Multimonitor setups must be on a desk. You can’t carry them around with your laptop.

A Better Solution

At MakeMIT 2015, a hackathon at MIT last March, my team and I prototyped a solution to this problem in virtual reality (VR). VR has been called the ultimate display, since it can take over your entire field of vision and appear to be infinite in all directions.

A multimonitor setup can be simulated in VR, letting you glance at different screens with just a turn of your head. Below is a video of our working prototype. As you can see, we have a 3x3, 9-monitor setup. Of course, a user could have as many screens as they want — without any additional cost. Also, the VR headset (in this case, an Oculus headset) is as portable as the laptop that it runs off of. So you can take your multimonitor setup anywhere.
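For a sense of how little math the layout takes, here's a rough sketch of placing a rows-by-columns grid of virtual screens on a cylinder around the viewer. The specific numbers (radius, angular spacing, row height) are illustrative assumptions, not values from our prototype:

```typescript
// Lay out a grid of virtual screens on a cylinder around the viewer.
// The viewer sits at the origin, the middle column is straight ahead, and
// the middle row is at eye level, so every screen is one head-turn away.
interface ScreenPose {
  position: [number, number, number]; // meters; -z is "forward"
  yawDegrees: number;                 // rotation so the screen faces the viewer
}

function layoutGrid(
  rows: number,
  cols: number,
  radius = 2.0,    // distance from the viewer, in meters (illustrative)
  angleStep = 35,  // horizontal degrees between columns (illustrative)
  rowHeight = 0.7  // vertical meters between rows (illustrative)
): ScreenPose[] {
  const poses: ScreenPose[] = [];
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const yaw = (c - (cols - 1) / 2) * angleStep; // center the columns
      const y = ((rows - 1) / 2 - r) * rowHeight;   // center the rows
      const rad = (yaw * Math.PI) / 180;
      poses.push({
        position: [radius * Math.sin(rad), y, -radius * Math.cos(rad)],
        yawDegrees: -yaw, // turn each screen back toward the viewer
      });
    }
  }
  return poses;
}

console.log(layoutGrid(3, 3).length); // 9 screens, no extra hardware
```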

The Future

I love the idea of augmented reality (AR). AR will free us from having to sit at a desk to be productive. We won't be limited to a 15" screen to access information about the real world. With AR, the information will be "in" the real world; it will blend the physical and digital worlds into one. But for now, we'll have to take baby steps while we wait for computer vision, high-quality see-through displays, and miniaturization to significantly improve.

So where's a good starting point? VR. Virtual reality doesn't require advanced computer vision, so it will reach the masses more cheaply and much more quickly than AR. In the long run, I would hope VR becomes a completely immersive environment to explore and think in, without any need for the concept of screens. But since that is many years away, this is a good short-term solution. Lots of people are focused on the gaming potential of VR — and I won’t deny that gaming has pushed the boundary of computation for decades — but I would love to see more tools for “productivity” in the VR environment.

Special thanks to my teammates: Ben Chrobot, Ariel Wexler, Anthony Kawecki, and Ostin Zarse. At the same hackathon, we also made Baymax.

spring 2015: in review

Here's the third "In Review" video I've made. It's the first one that covers a semester (the previous two were Summer 2014 and IAP 2015). Check it out:

what you see is *

Every developer on the planet knows the first acronym. Very few know the next two. There are many other "What you see is..." phrases, but these three actually matter.

WYSIWYG — What You See Is What You Get: a style of software that supposedly allows users to see the end result of their work. As a replacement for command-line control of the computer, WYSIWYG editors promised an intuitive, graphical user interface to create "content". The user gets a preview of what the end result will look like as they create their content. Bravo was the first WYSIWYG editor. It was developed in 1974, and it was the first time users had font options and text formatting in a word processor. These days, we should probably stop just copying 41-year-old concepts and create something better.

WYSIWYNC — What You See Is What You Never Could: first coined by Ted Nelson, WYSIWYNC editors really show the limitations of WYSIWYG. Or, at least they would if the software actually existed. 54 years after Nelson started developing his WYSIWYNC software, called Project Xanadu, it has yet to be completed. The problem with WYSIWYG is what comes after the 'G'. The full acronym should read: What You See Is What You Get... When You Print It Out. WYSIWYG editors create static content that a user could simply print out. However, screens allow for dynamic content that can't be represented on paper. Decades after the first GUI, we are still using screens to mimic paper. The world has been waiting for Project Xanadu for so long that most have given up. For the remaining believers, as Nelson says, we fight on.

WYSIATI — What You See Is All There Is: coined by Daniel Kahneman and described in his book Thinking, Fast and Slow. A phrase used to describe a bias in reasoning and decision making: people only consider the information presented in front of them, or information they can easily recall. Rarely do they take the time to realize how little information they have while making a decision. Considering only what one knows, and not understanding how little they know, can lead to poorly formed conclusions.

growing up on letterman

I’ve been watching Letterman for my entire life. By the time I was born, David Letterman had already been hosting a late night show for 13 years, and had been at the Late Show for two years.

When I was in elementary school, I heard The Top 10 List every morning while I ate breakfast. Our local radio station would rebroadcast the previous night’s list, and it would play at the same time every day. My morning routine was timed to the list. To get to school on time, I knew I had to be out the front door immediately after Dave read number one.

In my middle school days, I started playing tennis and traveling to tournaments all over New York. Some nights, after a long car ride to the hotel, we would get in just late enough that I could fall asleep to Letterman’s opening monologue.

By high school, I saw the show on a regular basis. Whenever I had a lot of homework or a big project to finish, I would be up long enough to hear everything from the announcer’s “And now, from the greatest city in the world” to “this has been Alan Kalter speaking, good night”.

For the first two years of college, I would stay up so late that watching Letterman would be a nice, mid-evening break. Even though I saw the show more often, it was still fun to watch Dave run across the stage before coming out. Listening to Paul Shaffer and The CBS Orchestra, seeing Letterman in his double-breasted suit and gray socks, and seeing him tap a pencil on his desk to start the next segment, were all still special occasions that I looked forward to.

Letterman was the perfect combination of clever wit and ridiculous shenanigans. His last show was the evening after my last final exam of Sophomore Year. Colbert will take over at the very beginning of the next school year, and I can’t wait. He’s going to be great, but it certainly will be strange not laughing along with Letterman when I’m up past 12am on a weeknight. For all of the laughs, David Letterman: thank you, and good night.

mit: sophomore year

At the end of the school year, I like to record what I spent time on. Here are a few things I've worked on throughout my second year at MIT. (See freshman year here). All of these are the structured parts of my past year. I spent a lot of time working and researching different topics, but I'll save that for a different post.


  • MIT Club Tennis: I continued playing on MIT's club tennis team, which I first joined as a freshman.
  • NASA: I worked at NASA as an intern during IAP. More about my experience here.
  • Science Olympiad @ MIT: We successfully ran MIT's first invitational tournament, bringing over 1,000 high school students from around the country to MIT's campus for the weekend.
  • Thiel Fellowship: I applied for the Thiel Fellowship. This was the culmination of months of research into the history of computation. I was selected as a Thiel Fellowship Finalist and interviewed in San Francisco. The application process was interesting, but it was even more interesting to spend time experimenting and thinking about interaction design.
  • USAGE: I joined the Undergraduate Student Advisory Group in EECS, providing feedback on changes to the EECS department.
  • Zeta Psi: I moved from Simmons into the house this year, and was elected Vice President.


Classes:

  • 6.004 - Computation Structures
  • 6.005 - Elements of Software Construction
  • 6.006 - Algorithms
  • 14.01 - Microeconomics
  • 6.115 - Microcomputer Project Laboratory (Microcontrollers Lab)
  • 6.s03 - Introduction to EECS II from a Medical Technology Perspective
  • MAS.s66 - Indistinguishable From… Magic as Interface, Technology, and Tradition
  • STS.008 - Technology & Experience

multi (device) tasking

Imagine using multiple devices to accomplish a single task. Instead of multitasking with your computer and phone, what if they seamlessly talked to each other to make individual tasks easier? There would be two benefits to any application that used this concept: new features and simpler interfaces.

New features could be added because there would be more screen real estate, and there would be new capabilities available with each additional device.

Simpler user interfaces would be possible because you wouldn't be limited to a single screen or a single input method. Options and controls would not be buried behind menus or layers. Each device would be optimized for what that device is best at accomplishing.

I created a proof of concept last year called Open Canvas. It's a drawing app where the canvas is on one screen (an iPad), and the controls (color, brush size, eraser) are on another (an iPhone). It makes it way easier to quickly switch options without covering up the image. In most other drawing apps, you have to find the controls buried in a menu or drawer. Once you find the controls, they appear on top of the canvas, interfering with the main task of drawing. Open Canvas fixes this problem. By "spreading out" the application onto two devices, it simplifies the user interface.
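To show how thin the communication layer can be, here's a rough sketch of what the control messages might look like. The message names and shapes below are hypothetical, not Open Canvas's actual code; the point is that the controller sends tiny state updates and the canvas applies them instantly:

```typescript
// Hypothetical controller-to-canvas protocol: the phone sends small state
// updates, and the tablet applies them to its drawing state in real time.
type ControlMessage =
  | { kind: "setColor"; hex: string }
  | { kind: "setBrushSize"; points: number }
  | { kind: "setTool"; tool: "brush" | "eraser" };

interface CanvasState {
  color: string;
  brushSize: number;
  tool: "brush" | "eraser";
}

// The controller serializes a message for whatever transport links the two
// devices (Bluetooth LE, local Wi-Fi, etc.).
function encode(msg: ControlMessage): string {
  return JSON.stringify(msg);
}

// The canvas decodes the message and updates its state, so no menu ever has
// to appear on top of the drawing.
function apply(state: CanvasState, raw: string): CanvasState {
  const msg = JSON.parse(raw) as ControlMessage;
  switch (msg.kind) {
    case "setColor":
      return { ...state, color: msg.hex };
    case "setBrushSize":
      return { ...state, brushSize: msg.points };
    case "setTool":
      return { ...state, tool: msg.tool };
  }
}

// Example: the phone changes the color; the tablet picks it up immediately.
let canvas: CanvasState = { color: "#000000", brushSize: 4, tool: "brush" };
canvas = apply(canvas, encode({ kind: "setColor", hex: "#ff3b30" }));
console.log(canvas.color); // "#ff3b30"
```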

Another great example of this is Alfred, and Alfred Remote. Alfred is a typing-based productivity application for OS X. Alfred Remote is an iPhone/iPad app that augments the desktop app: it controls Alfred on the desktop when you press large icons on the other device. It works really well, and it makes it much faster to access frequently used Alfred actions.

In two weeks, the Apple Watch will be available. Regardless of whether or not it is a success (don't bet against it), one great use case fits this general idea: the Watch as a remote control for apps. Procreate has already announced that you will be able to use the Apple Watch to access the controls for your drawing on the iPhone. I hope the Apple Watch will spark a new generation of apps that connect to larger-screen versions of the same app.

Open Canvas, Alfred Remote, and (soon-to-be) Procreate Pocket are previews of what's possible with current devices. The underlying technology is already here: Bluetooth LE and the high-level APIs to communicate between devices. The problem isn't the technology, it's the concept. The key is real-time communication between devices working toward a single task. Handoff, a new Apple technology, might seem like the answer at first. But it's not: it's wrong at the conceptual level. It only syncs data; it doesn't work in real time. An easy-to-use framework would really open up this category of application.

This entire concept may sound trivial, but I don't think it is. I think it's a step towards the "Internet of Things" promise of connected devices. Before we worry about getting our refrigerator to talk to our toaster, shouldn't we make sure the already-internet-enabled devices can talk to each other?