Greg Borenstein is a game designer, technologist, and teacher. His work explores game design, computer vision, drawing, machine learning, and generative storytelling as media for play and design. He currently works as a technical game designer at Riot Games. He has also worked as the consulting futurist for a television series, helping imagine its world, and co-wrote the ninth episode of the season, "Memento Mori".


Cold War decentralization gave us the Internet; terrorism and mass surveillance gave us the Kinect. Just like the premiere of the personal computer or the Internet, the release of the Kinect was another moment when the fruit of billions of dollars and decades of research that had previously been available only to the military and the intelligence community fell into the hands of regular people.

Face recognition, gait analysis, skeletonization, depth imaging—this cohort of technologies that had been developed to detect terrorists in public spaces could now suddenly be used for creative civilian purposes: building gestural interfaces for software, building cheap 3D scanners for personalized fabrication, using motion capture for easy 3D character animation, using biometrics to create customized assistive technologies for people with disabilities, etc.

And, with the arrival of the Kinect, the ability to create these applications is now within the reach of even weekend tinkerers and casual hackers.

Just like the personal computer and Internet revolutions before it, this Vision Revolution will surely also lead to an astounding flowering of creative and productive projects.

Comparing the arrival of the Kinect to the personal computer and the Internet may sound absurd. But keep in mind that when the personal computer was first invented, it was a geeky toy for tinkerers and enthusiasts.

All of these technologies only came to assume their critical roles in contemporary life slowly as individuals used them to make creative and innovative applications that eventually became fixtures in our daily lives. Right now it may seem absurd to compare the Kinect with the PC and the Internet, but a few decades from now, we may look back on it and compare it with the Altair or the ARPAnet as the first baby step toward a new technological world.

The purpose of this book is to provide the context and skills needed to build exactly these projects that reveal this newly possible world. Learning these skills means not just mastering a particular software library or API, but understanding the principles behind them so that you can apply them even as the practical details of the technology rapidly evolve.

And yet even mastering these basic skills will not be enough to build the projects that really make the most of this Vision Revolution. To do that, you also need to understand some of the wider context of the fields that will be revolutionized by the cheap, easy availability of depth data and skeleton information. To that end, this book will provide introductions and conceptual overviews of the fields of 3D scanning, digital fabrication, robotic vision, and assistive technology.

The last three chapters of this book will explore these topics through a series of in-depth projects. This book will not be a definitive reference to any of these topics; each is vast, comprehensive, and filled with its own fascinating intricacies.

This book aims to serve as a provocative introduction to each area—giving you enough context and techniques to start using the Kinect to make interesting projects and hoping that your progress will inspire you to follow the leads provided to investigate further.

Who This Book Is For

At its core, this book is for anyone who wants to learn more about building creative interactive applications with the Kinect, from interaction and game designers who want to build gestural interfaces to makers who want to work with a 3D scanner to artists who want to get started with computer vision.

That said, you will get the most out of it if you are one of the following: a beginning programmer looking to learn more sophisticated graphics and interaction techniques, specifically how to work in three dimensions, or an advanced programmer who wants a shortcut to learning the ins and outs of working with the Kinect and a guide to some of the specialized areas it enables.

This book is designed to proceed slowly from introductory topics into more sophisticated code and concepts, giving you a smooth introduction to the fundamentals of making interactive graphical applications while teaching you about the Kinect.

The goal is for you to level up from a beginner to a confident intermediate interactive graphics programmer.

The Structure of This Book

The goal of this book is to unlock your ability to build interactive applications with the Kinect. Membership in this Revolution has a number of benefits. However, membership in this Revolution does not come for free.

These skills are the basis of all the more advanced benefits of membership, and all of those cool abilities will be impossible without them. This book is designed to build up those skills one at a time, starting from the simplest and most fundamental and building toward the more complex and sophisticated. Toward this end, the first half of this book will act as a kind of primer in these programming skills.

Before we dive into controlling robots or 3D printing our faces, we need to start with the basics. The first four chapters of this book cover the fundamentals of writing Processing programs that use the data from the Kinect.

Processing is a creative coding environment that uses the Java programming language to make it easy for beginners to write simple interactive applications that include graphics and other rich forms of media. These concepts include looping through arrays of pixels, basic 3D drawing and orientation, and some simple geometric calculations.
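As a small taste of what looping through arrays of pixels looks like, here is a sketch in plain Java (the language underneath Processing). The class and method names are mine, not from the book, and the pixels are bare grayscale ints rather than Processing's color values:

```java
// Hypothetical example: scanning a pixel array, the same pattern
// Processing uses with its pixels[] array. Each entry is a grayscale
// brightness from 0 to 255.
public class PixelLoop {
    // Count how many pixels are brighter than a threshold.
    static int countBright(int[] pixels, int threshold) {
        int count = 0;
        for (int i = 0; i < pixels.length; i++) {
            if (pixels[i] > threshold) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] pixels = {12, 200, 90, 255, 34};
        System.out.println(countBright(pixels, 100)); // prints 2
    }
}
```

In a real Processing sketch, the same loop would run inside draw() over a live camera image after calling loadPixels(); the mechanics are identical.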

I will attempt to explain each of these concepts clearly and in depth. One nice side benefit to this approach is that these fundamental skills are relevant to a lot more than just working with the Kinect. If you master them here in the course of your work with the Kinect, they will serve you well throughout all your other work with Processing, unlocking many new possibilities in your work, and really pushing you decisively beyond beginner status.

There are three fundamental techniques that we need to build all of the fancy applications that make the Kinect so exciting: processing the depth image, working in 3D, and accessing the skeleton data. The first half of this book will serve as an introduction to each of these techniques. The first of these is the depth image. Unlike conventional images, in which each pixel records the color of light that reached the camera from that part of the scene, each pixel of the depth image records the distance of the object in that part of the scene from the Kinect.

When we look at depth images, they will look like strangely distorted black and white pictures. They look strange because the color of each part of the image indicates not how bright that object is, but how far away it is.

The brightest parts of the image are the closest, and the darkest parts are the farthest away. If we write a Processing program that examines the brightness of each pixel in this depth image, we can figure out the distance of every object in front of the Kinect.
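That per-pixel scan can be sketched in plain Java. This is a simplified, hypothetical version: the depth image arrives as a flat grayscale array where brighter means closer, and we find the single closest pixel:

```java
// Hypothetical sketch: find the closest point in a depth image,
// where a brighter pixel value means a closer object.
public class ClosestPoint {
    // Returns the index of the brightest (closest) pixel.
    static int closestPixelIndex(int[] depthPixels) {
        int closest = 0;
        for (int i = 1; i < depthPixels.length; i++) {
            if (depthPixels[i] > depthPixels[closest]) {
                closest = i;
            }
        }
        return closest;
    }

    public static void main(String[] args) {
        int[] depth = {40, 180, 90, 250, 10};
        // With an image of width w, this index maps back to the
        // pixel coordinates x = i % w and y = i / w.
        System.out.println(closestPixelIndex(depth)); // prints 3
    }
}
```

Run the same scan on every frame and the winning index follows the closest object as it moves, which is the basis of the simple tracking described above.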

Using this same technique and a little bit of clever coding, we can also follow the closest point as it moves, which can be a convenient way of tracking a user for simple interactivity.

Point Clouds

This first approach treats the depth data as if it were only two-dimensional. It looks at the depth information captured by the Kinect as a flat image when really it describes a three-dimensional scene. For each pixel in the depth image, we can think of its position within the image as its x-y coordinates.
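Turning those x-y coordinates plus the depth value into a 3D point can be sketched with a simplified pinhole-camera model. The focal length below is an assumed placeholder, not the Kinect's real calibration:

```java
// Simplified, hypothetical conversion of a depth pixel to a 3D point.
// F is an assumed focal length in pixels; real Kinect calibration differs.
public class PointCloud {
    static final float F = 525.0f;

    // px, py: pixel position in the image; depth: distance in millimeters.
    static float[] toPoint(int px, int py, float depth, int width, int height) {
        float x = (px - width / 2f) * depth / F;  // shift origin to image center
        float y = (py - height / 2f) * depth / F; // then scale by distance
        return new float[]{x, y, depth};
    }

    public static void main(String[] args) {
        // The center pixel of a 640x480 depth image, one meter away,
        // lands on the camera's axis: (0, 0, 1000).
        float[] p = toPoint(320, 240, 1000f, 640, 480);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

Repeating this for every pixel in the depth image yields the cloud of 3D points discussed next.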

You can think of this point cloud as the 3D equivalent of a pixelated image. While it might look solid from far away, if we look closely, the image will break down into a bunch of distinct points with space visible between them. First of all, the point cloud is just cool.

Having a live 3D representation of yourself and your surroundings on your screen that you can manipulate and view from different angles feels a little bit like being in the future. Another frequent area of confusion in 3D drawing is the concept of the camera. To translate our 3D points from the Kinect into a 2D image that we can actually draw on our flat computer screens, Processing uses the metaphor of a camera.

Just as a real camera flattens the objects in front of it into a 2D image, this virtual camera does the same with our 3D geometry. Everything that the camera sees gets rendered onto the screen from the angle and in the way that it sees it. The third technique is in some ways both the simplest to work with and the most powerful. One of the big advantages of depth images is that computer vision algorithms work better on them than on conventional color images.

The reason Microsoft developed and shipped a depth camera as a controller for the Xbox was not to show players cool looking point clouds, but because they could run software on the Xbox that processes the depth image in order to locate people and find the positions of their body parts.

By using the right Processing library, we can get access to this user position data without having to implement this incredibly sophisticated skeletonization algorithm ourselves. These new techniques will serve as the basic vocabulary for some exciting new interfaces we can use in our sketches, letting users communicate with us by striking poses, doing dance moves, and performing exercises among many other natural human movements.
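Once a library hands us joint positions, pose detection reduces to simple geometric comparisons. Here is a hypothetical sketch, with the joints hard-coded as (x, y, z) arrays instead of live skeleton data and all names invented for illustration:

```java
// Hypothetical pose check on skeleton joints. In screen coordinates
// a smaller y means higher up, so "hands up" means both hands have
// a smaller y value than the head.
public class PoseCheck {
    static boolean handsUp(float[] head, float[] leftHand, float[] rightHand) {
        return leftHand[1] < head[1] && rightHand[1] < head[1];
    }

    public static void main(String[] args) {
        float[] head      = {0, 100, 800};
        float[] leftHand  = {-50, 60, 800};
        float[] rightHand = {50, 70, 800};
        System.out.println(handsUp(head, leftHand, rightHand)); // prints true
    }
}
```

In a live sketch the three arrays would be refreshed each frame from the skeleton-tracking library, and a pose that holds true across several frames would trigger an interaction.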

With the Kinect, things like 3D scanning and advanced robotic vision are suddenly available to anyone with an understanding of the fundamentals described here. But to make the most of these new possibilities, you need a bit of background in the actual application areas.

The final two chapters will provide you with introductions to exactly these topics: 3D scanning for fabrication and 3D vision for robotics. The field of robotics has achievements that include robots that have driven on the moon and ones that assemble automobiles. Our robotics project will be an experiment in inverse kinematics. This is a much harder problem than the forward kinematic problem.
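To see why, it helps to look at the forward case first. For a two-joint arm, forward kinematics is just trigonometry: given the joint angles, compute where the hand ends up. Inverse kinematics runs this backward, from a desired hand position to the angles, and can have zero, one, or many solutions. A sketch with made-up segment lengths:

```java
// Forward kinematics for a planar two-joint arm: given segment lengths
// and joint angles (in radians), compute the hand position. Inverting
// this (position -> angles) is the inverse kinematic problem.
public class ForwardKinematics {
    static double[] handPosition(double len1, double len2, double a1, double a2) {
        double x = len1 * Math.cos(a1) + len2 * Math.cos(a1 + a2);
        double y = len1 * Math.sin(a1) + len2 * Math.sin(a1 + a2);
        return new double[]{x, y};
    }

    public static void main(String[] args) {
        // Both segments of length 1, both angles zero: the arm points
        // straight along the x axis, so the hand sits at (2, 0).
        double[] p = handPosition(1.0, 1.0, 0.0, 0.0);
        System.out.println(p[0] + " " + p[1]); // prints 2.0 0.0
    }
}
```

Going forward is a pair of formulas; going backward means solving them for the angles, which is where the complex math and confusing code come in.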

A serious solution to it can involve complex math and confusing code. None of these chapters are meant to be definitive guides to their respective areas, but instead to give you just enough background to get started applying these Kinect fundamentals in order to build your own ideas. Instead of proceeding slowly and thoroughly through comprehensive explanations of principles, these later chapters are structured as individual projects.

It will be exciting. Then, at the end of the book, our scope will widen. In addition to exploring other programming environments, you can take your Kinect work further by learning about 3D graphics in general. OpenGL is a huge, complex, and powerful system, and Processing only exposes you to the tiniest bit of it.

Learning more about OpenGL itself will unlock all kinds of more advanced possibilities for your Kinect applications.

Acknowledgments

This book would have been inconceivable without the help of many people. Kyle McDonald and Zach Lieberman taught a short, seven-week class in the spring that changed my life. That course introduced me to many of the techniques and concepts I attempt to pass on in this book.

I hope my presentation of this material is half as clear and thorough as theirs. Further, Zach came up with the idea for the artist interviews, which ended up as one of my favorite parts of this book. And Kyle was invaluable in helping me translate his work on 3D scanning for fabrication, which makes up the soul of Chapter 5. Lily Szajnberg was my first student and ideal reader. Andrew was the first person—even before me—to believe I could write a book.

His early feedback helped turn this project from a very long blog post into a book. Max Rheiner created the SimpleOpenNI library I use throughout this book and acted as a technical editor making sure I got all the details right. This book would have been more difficult and come out worse without his work. Your work, and the hope of seeing more like it, is why I wrote this book.

Huge thanks to Liz Arum and Matt Griffin from MakerBot as well as Catarina Mota, who helped me get up to speed on making good prints, and Duann Scott from Shapeways, who made sure my prints would arrive in time to be included.

Using Code Examples

This book is here to help you get your job done. In general, you may use the code in this book in your programs and documentation. For example, writing a program that uses several chunks of code from this book does not require permission. Answering a question by citing this book and quoting example code does not require permission.

An attribution usually includes the title, author, publisher, and ISBN. Copyright Greg Borenstein.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

Constant width bold
Shows commands or other text that should be typed literally by the user.

Constant width italic
Shows text that should be replaced with user-supplied values or by values determined by context.

This box signifies a tip, suggestion, or general note.

Warning
This box indicates a warning or caution.


Making Things See: 3D Vision with Kinect, Processing, and Arduino