OpenGL ES 2.0 for iPhone Tutorial

Source: http://www.raywenderlich.com/3664/opengl-es-2-0-for-iphone-tutorial

Learn how to use OpenGL ES 2.0 from the ground up!

OpenGL ES is the lowest-level API that you use to program 2D and 3D graphics on the iPhone.

If you’ve used other frameworks such as Cocos2D, Sparrow, Corona, or Unity, these are all built on top of OpenGL!

One of the reasons why programmers like to use the above frameworks rather than using OpenGL directly is because OpenGL is notoriously difficult to learn.

And that’s what this tutorial is for – to make the learning curve a little less steep for beginner OpenGL developers!

In this series, you’ll get hands-on experience with OpenGL ES 2.0 and will create a simple “Hello, World” app that displays some simple geometry.

In the process, you’ll learn the following:

  • How to get a basic OpenGL app working from scratch
  • How to compile and run vertex & fragment shaders
  • How to render a simple square to the screen with vertex buffer objects
  • How to apply projection and model-view transforms
  • How to render a 3D object with depth testing

Caveat: I am not an OpenGL expert! I am learning this myself, and am writing tutorials as I go. If I make any boneheaded mistakes, feel free to chime in with corrections or insights! :]

Without further ado, let’s start learning OpenGL ES!

 

OpenGL ES 1.0 vs OpenGL ES 2.0

Open GL ES 2.0 - Y U Make Me Write Everything?

First things first – you should know that there are two different versions of OpenGL ES (1.0 and 2.0), and they are very different.

OpenGL ES 1.0 uses a fixed pipeline, which is a fancy way of saying you use built-in functions to set lights, vertexes, colors, cameras, and more.

OpenGL ES 2.0 uses a programmable pipeline, which is a fancy way of saying all those built-in functions go away, and you have to write everything yourself.

“OMG!” you may think, “well why would I ever want to use OpenGL ES 2.0 then, if it’s just extra work?!” Although it does add some extra work, with OpenGL ES 2.0 you can make some really cool effects that wouldn’t be possible in OpenGL ES 1.0, such as this toon shader (via Imagination Technologies):

Toon Shader with OpenGL ES 2.0

Or even these amazing lighting and shadow effects (via Fabien Sanglard):

Lighting and Shadow Shaders with OpenGL ES 2.0

Pretty cool eh?

OpenGL ES 2.0 is only available on the iPhone 3GS+, iPod Touch 3G+, and all iPads. But these devices now make up the majority of those in use, so that’s what we’re going to focus on in this tutorial!

Getting Started

Although Xcode comes with an OpenGL ES project template, I think that’s confusing for beginners because you have to go through a lot of code you didn’t write yourself and try to understand how it works.

I think it’s easier if you write all the code from scratch, so you can understand how everything fits together – so that’s what we’re going to do here!

Start up Xcode, and go to File\New\New Project. Select iOS\Application\Window-based Application, and click Next. Name your project HelloOpenGL, click Next, choose a folder to save it in, and click Create.

Compile and run the app, and you should see a blank white screen:

Window Based Application Template

The Window-based Application template is about as “from scratch” as you can get with project templates in Xcode. All it does is create a UIWindow and present it to the screen – no views, view controllers, or anything!

Let’s add a new view that we’ll use to contain the OpenGL content. Go to File\New\New File, choose iOS\Cocoa Touch\Objective-C class, and click Next. Enter UIView for Subclass of, click Next, name the new file OpenGLView.m, and click Save.

Next, you’ll add a bunch of code to OpenGLView.m inside the @implementation to just color the screen green for now.

Add each step here bit by bit, and I’ll explain what each part does as we go.

1) Add required frameworks

The first step is to add the two frameworks you need to use OpenGL – OpenGLES.framework and QuartzCore.framework.

To add these frameworks in Xcode 4, click on your HelloOpenGL project in the Groups & Files tree, and select the HelloOpenGL target. Expand the Link Binary with Libraries section, click the + button, and select OpenGLES.framework. Repeat for QuartzCore.framework as well.

Adding OpenGL Frameworks in Xcode 4

2) Modify OpenGLView.h

Modify OpenGLView.h to look like the following:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

@interface OpenGLView : UIView {
    CAEAGLLayer* _eaglLayer;
    EAGLContext* _context;
    GLuint _colorRenderBuffer;
}

@end

This imports the headers you need for OpenGL, and declares the instance variables that the methods you’re about to write will use.

3) Set layer class to CAEAGLLayer

+ (Class)layerClass {
    return [CAEAGLLayer class];
}

To set up a view to display OpenGL content, you need to set its default layer to a special kind of layer called a CAEAGLLayer. The way you set the default layer is to simply override the layerClass method, like you just did above.

4) Set layer to opaque

- (void)setupLayer {
    _eaglLayer = (CAEAGLLayer*) self.layer;
    _eaglLayer.opaque = YES;
}

By default, CALayers are set to non-opaque (i.e. transparent). However, this is bad for performance reasons (especially with OpenGL), so it’s best to set this as opaque when possible.

5) Create OpenGL context

- (void)setupContext {
    EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES2;
    _context = [[EAGLContext alloc] initWithAPI:api];
    if (!_context) {
        NSLog(@"Failed to initialize OpenGLES 2.0 context");
        exit(1);
    }
    if (![EAGLContext setCurrentContext:_context]) {
        NSLog(@"Failed to set current OpenGL context");
        exit(1);
    }
}

To do anything with OpenGL, you need to create an EAGLContext, and set the current context to the newly created context.

An EAGLContext manages all of the information iOS needs to draw with OpenGL. It’s similar to how you need a Core Graphics context to do anything with Core Graphics.

When you create a context, you specify what version of the API you want to use. Here, you specify that you want to use OpenGL ES 2.0. If it is not available (such as if the program was run on an iPhone 3G), the app would terminate.

6) Create a render buffer

- (void)setupRenderBuffer {
    glGenRenderbuffers(1, &_colorRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
}

The next step to use OpenGL is to create a render buffer, which is an OpenGL object that stores the rendered image to present to the screen.

Sometimes you’ll see a render buffer also referred to as a color buffer, because in essence it’s storing colors to display!

There are three steps to create a render buffer:

  1. Call glGenRenderbuffers to create a new render buffer object. This returns a unique integer for the render buffer (we store it here in _colorRenderBuffer). Sometimes you’ll see this unique integer referred to as an “OpenGL name.”
  2. Call glBindRenderbuffer to tell OpenGL “whenever I refer to GL_RENDERBUFFER, I really mean _colorRenderBuffer.”
  3. Finally, allocate some storage for the render buffer. The EAGLContext you created earlier has a method you can use for this called renderbufferStorage.

7) Create a frame buffer

- (void)setupFrameBuffer {
    GLuint framebuffer;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_RENDERBUFFER, _colorRenderBuffer);
}

A frame buffer is an OpenGL object that contains a render buffer, and some other buffers you’ll learn about later such as a depth buffer, stencil buffer, and accumulation buffer.

The first two steps for creating a frame buffer are very similar to creating a render buffer – they use the same glGen and glBind pattern you’ve seen before, just ending with “Framebuffer/s” instead of “Renderbuffer/s”.

The last function call (glFramebufferRenderbuffer) is new however. It lets you attach the render buffer you created earlier to the frame buffer’s GL_COLOR_ATTACHMENT0 slot.

8) Clear the screen

- (void)render {
    glClearColor(0, 104.0/255.0, 55.0/255.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}

We’re trying to get something displaying on the screen as quickly as possible, so before dealing with vertexes, shaders, and the like, let’s just clear the entire screen to a particular color!

Let’s set it to the main color of this website, which is RGB 0, 104, 55. Notice you have to divide the color values by 255 (the max color value), because the color range for each component is from 0 to 1.

To accomplish this we have to take three steps here:

  • Call glClearColor to specify the RGB and alpha (transparency) values to use when clearing the screen.
  • Call glClear to actually perform the clearing. Remember that there can be different types of buffers, such as the render/color buffer we’re displaying, and others we’re not using yet such as depth or stencil buffers. Here we use the GL_COLOR_BUFFER_BIT to specify what exactly to clear – in this case, the current render/color buffer.
  • Call a method on the OpenGL context to present the render/color buffer to the UIView’s layer!

9) Wrapup code in OpenGLView.m

// Replace initWithFrame with this
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        [self setupLayer];
        [self setupContext];
        [self setupRenderBuffer];
        [self setupFrameBuffer];
        [self render];
    }
    return self;
}

// Replace dealloc method with this
- (void)dealloc
{
    [_context release];
    _context = nil;
    [super dealloc];
}

This is just some helper code to call all of the above methods, and clean up in dealloc.

10) Hook up OpenGLView to the App Delegate

Make the following changes to HelloOpenGLAppDelegate.h:

// At top of file
#import "OpenGLView.h"

// Inside @interface
OpenGLView* _glView;

// After @interface
@property (nonatomic, retain) IBOutlet OpenGLView *glView;

And the following changes to HelloOpenGLAppDelegate.m:

// At top of file
@synthesize glView=_glView;

// At top of application:didFinishLaunchingWithOptions
CGRect screenBounds = [[UIScreen mainScreen] bounds];
self.glView = [[[OpenGLView alloc] initWithFrame:screenBounds] autorelease];
[self.window addSubview:_glView];

// In dealloc
[_glView release];

This simply creates a new instance of the OpenGLView at startup, and attaches it to the window.

And that’s it! Compile and run your project, and you should see a green screen drawn by OpenGL ES 2.0!

Green Screen rendered with OpenGL

Adding Vertex and Fragment Shaders

In OpenGL ES 2.0, to render any geometry to the scene, you have to create two tiny little programs called shaders.

Shaders are written in a C-like language called GLSL. Don’t worry too much about studying up on the reference at this point – you can get by just by looking at the examples in this tutorial for now.

There are two types of shaders:

  • Vertex shaders are programs that get called once per vertex in your scene. So if you are rendering a simple scene with a single square, with one vertex at each corner, this would be called four times. Its job is to perform some calculations such as lighting, geometry transforms, etc., figure out the final position of the vertex, and also pass on some data to the fragment shader.
  • Fragment shaders are programs that get called once per pixel (sort of) in your scene. So if you’re rendering that same simple scene with a single square, it will be called once for each pixel that the square covers. Fragment shaders can also perform lighting calculations, etc, but their most important job is to set the final color for the pixel.

Like I said, the easiest way to understand these is to look at some examples. Let’s keep things nice and simple and write the simplest possible vertex and fragment shaders we can!

In Xcode, go to File\New\New File…, choose iOS\Other\Empty, and click Next. Name the new file SimpleVertex.glsl, and click Save.

Open up SimpleVertex.glsl and add the following code:

attribute vec4 Position; // 1
attribute vec4 SourceColor; // 2

varying vec4 DestinationColor; // 3

void main(void) { // 4
    DestinationColor = SourceColor; // 5
    gl_Position = Position; // 6
}

Everything is new here, so let’s go over it line by line!

  1. The attribute keyword declares that this shader is going to be passed in an input variable called Position. Later on, you’ll write some code to pass some data into this input variable. It will be used to indicate the position of the Vertex. Note that the type of the variable is a vec4, which means a vector with 4 components.
  2. This declares a second input variable, for the color of the vertex.
  3. This declares another variable, but it doesn’t have the attribute keyword, so it is an output variable that will be passed to the fragment shader. It also has the varying keyword, which is a fancy way of saying “I’m going to tell you the value for a particular vertex, but when you need to figure out the value for a given pixel, figure it out by smoothing out the values between nearby vertexes.”

Huh? That last bit sounds confusing, but is actually pretty easy to understand if you see a picture:

Varying Keyword in OpenGL ES 2.0

So basically, you can specify a different color for each vertex, and it will make all the values in-between a neat gradient! You’ll see an example of that for yourself soon.

  4. Every shader begins with a main – just like C!
  5. Sets the destination color for this vertex equal to the source color, and lets OpenGL interpolate the values as explained above.
  6. There’s a built-in output variable called gl_Position that you have to set in the vertex shader to the final position of the vertex. Here we just set it to the original position without changing it at all.

OK, that’s it for our simple vertex shader! Let’s add a simple fragment shader next.

Go to File\New\New File…, choose iOS\Other\Empty, and click Next. Name the new file SimpleFragment.glsl, and click Save.

Open up SimpleFragment.glsl and add the following code:

varying lowp vec4 DestinationColor; // 1

void main(void) { // 2
    gl_FragColor = DestinationColor; // 3
}

This is pretty short and sweet but let’s explain it line by line anyways:

  1. This is the input variable from the vertex shader. It’s the exact same definition as the vertex shader had, except it has an additional keyword: lowp. When you declare variables in a fragment shader, you need to give them a precision. A good rule of thumb is to use the lowest precision you can get away with, for a performance bonus. We set it to the lowest precision here, but there’s also mediump and highp if you need them.
  2. Just like in a vertex shader, a fragment shader also begins with main.
  3. Just like you need to set gl_Position in the vertex shader, you have to set gl_FragColor in the fragment shader. This simply sets it to the destination color passed in (interpolated by OpenGL).

Not so bad, eh? Now let’s add some code to start using these in our app.

Compiling Vertex and Fragment Shaders

We’ve added the two shaders to our Xcode project as glsl files, but guess what – Xcode doesn’t do anything with them except copy them into our application bundle.

It’s the job of our app to compile and run these shaders – at runtime!

You might be surprised by this (it’s kinda weird to have an app compiling code on the fly!), but it’s set up this way so that the shader code isn’t dependent on any particular graphics chip, etc.

So let’s write a method we can use to compile these shaders. Open up OpenGLView.m and add the following method above initWithFrame:

- (GLuint)compileShader:(NSString*)shaderName withType:(GLenum)shaderType {

    // 1
    NSString* shaderPath = [[NSBundle mainBundle] pathForResource:shaderName
        ofType:@"glsl"];
    NSError* error;
    NSString* shaderString = [NSString stringWithContentsOfFile:shaderPath
        encoding:NSUTF8StringEncoding error:&error];
    if (!shaderString) {
        NSLog(@"Error loading shader: %@", error.localizedDescription);
        exit(1);
    }

    // 2
    GLuint shaderHandle = glCreateShader(shaderType);

    // 3
    const char * shaderStringUTF8 = [shaderString UTF8String];
    int shaderStringLength = [shaderString length];
    glShaderSource(shaderHandle, 1, &shaderStringUTF8, &shaderStringLength);

    // 4
    glCompileShader(shaderHandle);

    // 5
    GLint compileSuccess;
    glGetShaderiv(shaderHandle, GL_COMPILE_STATUS, &compileSuccess);
    if (compileSuccess == GL_FALSE) {
        GLchar messages[256];
        glGetShaderInfoLog(shaderHandle, sizeof(messages), 0, &messages[0]);
        NSString *messageString = [NSString stringWithUTF8String:messages];
        NSLog(@"%@", messageString);
        exit(1);
    }

    return shaderHandle;
}

OK, let’s go over how this works:

  1. Gets an NSString with the contents of the file. This is regular old UIKit programming, many of you should be used to this kind of stuff already.
  2. Calls glCreateShader to create an OpenGL object to represent the shader. When you call this function you need to pass in a shaderType to indicate whether it’s a fragment or vertex shader. We take this in as a parameter to the method.
  3. Calls glShaderSource to give OpenGL the source code for this shader. We do some conversion here to convert the source code from an NSString to a C-string.
  4. Finally, calls glCompileShader to compile the shader at runtime!
  5. This can fail – and it will in practice if your GLSL code has errors in it. When it does fail, it’s useful to get some output messages in terms of what went wrong. This code uses glGetShaderiv and glGetShaderInfoLog to output any error messages to the screen (and quits so you can fix the bug!)

You can use this method to compile the vertex and fragment shaders, but there are a few more steps – linking them together, telling OpenGL to actually use the program, and getting some pointers to the attribute slots where you’ll be passing in the input values to the shaders.

Add a new method to do this right below compileShader:

- (void)compileShaders {

    // 1
    GLuint vertexShader = [self compileShader:@"SimpleVertex"
        withType:GL_VERTEX_SHADER];
    GLuint fragmentShader = [self compileShader:@"SimpleFragment"
        withType:GL_FRAGMENT_SHADER];

    // 2
    GLuint programHandle = glCreateProgram();
    glAttachShader(programHandle, vertexShader);
    glAttachShader(programHandle, fragmentShader);
    glLinkProgram(programHandle);

    // 3
    GLint linkSuccess;
    glGetProgramiv(programHandle, GL_LINK_STATUS, &linkSuccess);
    if (linkSuccess == GL_FALSE) {
        GLchar messages[256];
        glGetProgramInfoLog(programHandle, sizeof(messages), 0, &messages[0]);
        NSString *messageString = [NSString stringWithUTF8String:messages];
        NSLog(@"%@", messageString);
        exit(1);
    }

    // 4
    glUseProgram(programHandle);

    // 5
    _positionSlot = glGetAttribLocation(programHandle, "Position");
    _colorSlot = glGetAttribLocation(programHandle, "SourceColor");
    glEnableVertexAttribArray(_positionSlot);
    glEnableVertexAttribArray(_colorSlot);
}

Here’s how each section works:

  1. Uses the method you just wrote to compile the vertex and fragment shaders.
  2. Calls glCreateProgram, glAttachShader, and glLinkProgram to link the vertex and fragment shaders into a complete program.
  3. Calls glGetProgramiv and glGetProgramInfoLog to check and see if there were any link errors, and display the output and quit if so.
  4. Calls glUseProgram to tell OpenGL to actually use this program when given vertex info.
  5. Finally, calls glGetAttribLocation to get a pointer to the input values for the vertex shader, so we can set them in code. Also calls glEnableVertexAttribArray to enable use of these arrays (they are disabled by default).

Two last steps. Add the following method call to initWithFrame right before calling render:

[self compileShaders];

And declare the instance variables for _colorSlot and _positionSlot inside the @interface in OpenGLView.h as follows:

GLuint _positionSlot;
GLuint _colorSlot;

You can compile and run now, and if it displays a green screen (and does not quit with an error), it means that your vertex and fragment shaders compiled OK at runtime!

Of course, nothing looks different because we haven’t given the shaders any vertex geometry to render. So let’s take care of that next.

Creating Vertex Data for a Simple Square

Let’s start things nice and simple by rendering a square to the screen. The square will be set up like the following:

Vertices for our square

When you render geometry with OpenGL, keep in mind that it can’t render squares – it can only render triangles. However we can create a square with two triangles as you can see in the picture above: one triangle with vertices (0, 1, 2), and one triangle with vertices (2, 3, 0).

One of the nice things about OpenGL ES 2.0 is you can keep your vertex data organized in whatever manner you like. Open up OpenGLView.m and create a plain old C-structure and a few arrays to keep track of our square information, as shown below:

typedef struct {
    float Position[3];
    float Color[4];
} Vertex;

const Vertex Vertices[] = {
    {{1, -1, 0}, {1, 0, 0, 1}},
    {{1, 1, 0}, {0, 1, 0, 1}},
    {{-1, 1, 0}, {0, 0, 1, 1}},
    {{-1, -1, 0}, {0, 0, 0, 1}}
};

const GLubyte Indices[] = {
    0, 1, 2,
    2, 3, 0
};

So basically we create:

  • a structure to keep track of all our per-vertex information (currently just color and position)
  • an array with all the info for each vertex
  • an array that gives a list of triangles to create, by specifying the 3 vertices that make up each triangle

We now have all the information we need, we just need to pass it to OpenGL!

Creating Vertex Buffer Objects

The best way to send data to OpenGL is through something called Vertex Buffer Objects.

Basically these are OpenGL objects that store buffers of vertex data for you. You use a few function calls to send your data over to OpenGL-land.

There are two types of vertex buffer objects – one to keep track of the per-vertex data (like we have in the Vertices array), and one to keep track of the indices that make up triangles (like we have in the Indices array).

So add a method above initWithFrame to create these:

- (void)setupVBOs {

    GLuint vertexBuffer;
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);

    GLuint indexBuffer;
    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
}

You can see that it’s pretty simple. It uses a similar pattern that you’ve seen before – calls glGenBuffers to create a new Vertex Buffer object, glBindBuffer to tell OpenGL “hey when I say GL_ARRAY_BUFFER, I really mean vertexBuffer”, and glBufferData to send the data over to OpenGL-land.

And before we forget, add this to initWithFrame right before calling render:

[self setupVBOs];

Updated Render Code

We have all of the pieces in place, finally we can update the render method to draw our new vertex data to the screen, using our new shaders!

Replace render with the following:

- (void)render {
    glClearColor(0, 104.0/255.0, 55.0/255.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);

    // 1
    glViewport(0, 0, self.frame.size.width, self.frame.size.height);

    // 2
    glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE,
        sizeof(Vertex), 0);
    glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE,
        sizeof(Vertex), (GLvoid*) (sizeof(float) * 3));

    // 3
    glDrawElements(GL_TRIANGLES, sizeof(Indices)/sizeof(Indices[0]),
        GL_UNSIGNED_BYTE, 0);

    [_context presentRenderbuffer:GL_RENDERBUFFER];
}

This works as follows:

  1. Calls glViewport to set the portion of the UIView to use for rendering. This sets it to the entire window, but if you wanted a smaller part you could change these values.
  2. Calls glVertexAttribPointer to feed the correct values to the two input variables for the vertex shader – the Position and SourceColor attributes.

This is a particularly important function so let’s go over how it works carefully.

  • The first parameter specifies the attribute name to set. We got these earlier when we called glGetAttribLocation.
  • The second parameter specifies how many values are present for each vertex. If you look back up at the Vertex struct, you’ll see that for the position there are three floats (x,y,z) and for the color there are four floats (r,g,b,a).
  • The third parameter specifies the type of each value – which is float for both Position and Color.
  • The fourth parameter specifies whether the values should be normalized – for floating-point data like ours it’s always GL_FALSE.
  • The fifth parameter is the size of the stride, which is a fancy way of saying “the size of the data structure containing the per-vertex data”. So we can simply pass in sizeof(Vertex) here to get the compiler to compute it for us.
  • The final parameter is the offset within the structure to find this data. The position data is at the beginning of the structure, so we can pass 0 here; the color data comes after the position data (which was 3 floats, so we pass 3 * sizeof(float)).

Ok back to the final part of the code listing!

  3. Calls glDrawElements to make the magic happen! This actually ends up calling your vertex shader for every vertex you pass in, and then the fragment shader on each pixel to display on the screen.

This is also an important function so let’s discuss each parameter here as well.

  • The first parameter specifies the manner of drawing the vertices. There are different options you may come across in other tutorials like GL_LINE_STRIP or GL_TRIANGLE_FAN, but GL_TRIANGLES is the most generically useful (especially when combined with VBOs) so it’s what we cover here.
  • The second parameter is the count of vertices to render. We use a C trick to compute the number of elements in an array here by dividing the sizeof(Indices) (which gives us the size of the array in bytes) by sizeof(Indices[0]) (which gives us the size of the first element in the array).
  • The third parameter is the data type of each individual index in the Indices array. We’re using an unsigned byte for that so we specify that here.
  • From the documentation, it appears that the final parameter should be a pointer to the indices. But since we’re using VBOs it’s a special case – it will use the indices array we already passed to OpenGL-land in the GL_ELEMENT_ARRAY_BUFFER.

Guess what – you’re done! Compile and run the app and you should see a pretty rectangle on the screen:

A colorful rectangle rendered with OpenGL ES 2.0

You may be wondering why this rectangle happens to fit perfectly on the screen. Well by default, OpenGL has the “camera” at (0,0,0), looking down the z-axis. The bottom left of the screen is mapped to (-1,-1), and the upper right of the screen is mapped to (1,1), so it “stretches” our square to fit the entire screen.

Obviously, in a real app you’ll want to have more control over how the “camera” behaves. So let’s talk about how you can do that with a projection transform!

Adding a Projection

To make objects appear 3D on a 2D screen, we need to apply a projection transform on the objects. Here’s a diagram that shows how this works:

Diagram of a projection transform

Basically we have a “near” plane and a “far” plane, and the objects we want to display are in-between. The closer an object is to the “near” plane, the larger it appears, and the closer it is to the “far” plane, the smaller it appears. This mimics the way the human eye works.

Let’s see how we can modify our app to use a projection. Start by opening SimpleVertex.glsl, and make the following changes:

// Add right before the main
uniform mat4 Projection;

// Modify gl_Position line as follows
gl_Position = Projection * Position;

Here we add a new input variable called Projection. Notice that instead of marking it as an attribute, we mark it as a uniform. This means that we just pass in a constant value for all vertices, rather than a per-vertex value.

Also note that the Projection is marked as a mat4 type. Mat4 stands for a 4×4 matrix. Matrix math is a big subject that we won’t really cover here, but for now just think of them as things you can use to scale, rotate, or translate vertices. We’ll pass in a matrix that moves our vertices around according to the Projection diagram above.

Next, we set the final position of the vertex to be the Projection multiplied by the Position. The way you apply a matrix transform on something is to multiply like we did here! Order does matter by the way.

Now you need to pass the Projection matrix into your vertex shader. However, this involves some complicated linear algebra and sadly I have forgotten too much of that since my college days :[ But luckily you don’t need to understand it to use it, since this is well-known stuff that many smart people have already solved!

Smart people like Bill Hollings, the author of Cocos3D. He’s written a full-featured 3D graphics framework that integrates nicely with Cocos2D (and maybe I’ll write a tutorial about it sometime!) But anyway, Cocos3D contains a nice Objective-C vector and matrix library that we can pull into this project quite easily.

I’ve put the Cocos3D Math Library files you need into a zip (and removed a few unnecessary dependencies from them), so go ahead and download them and drag the files into your project. Make sure “Copy items into destination group’s folder (if needed)” is checked, and click Finish.

Then add a new instance variable to OpenGLView.h inside the @interface:

GLuint _projectionUniform;

And make the following changes to OpenGLView.m:

// Add to top of file
#import "CC3GLMatrix.h"

// Add to bottom of compileShaders
_projectionUniform = glGetUniformLocation(programHandle, "Projection");

// Add to render, right before the call to glViewport
CC3GLMatrix *projection = [CC3GLMatrix matrix];
float h = 4.0f * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-2 andRight:2 andBottom:-h/2 andTop:h/2
    andNear:4 andFar:10];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);

// Modify vertices so they are within projection near/far planes
const Vertex Vertices[] = {
    {{1, -1, -7}, {1, 0, 0, 1}},
    {{1, 1, -7}, {0, 1, 0, 1}},
    {{-1, 1, -7}, {0, 0, 1, 1}},
    {{-1, -1, -7}, {0, 0, 0, 1}}
};

Here we import the header for the math library, and call glGetUniformLocation to get the handle we need to set the Projection input variable in the vertex shader.

Then, we use the math library to create a projection matrix. It comes with a nice and easy to use function that lets you specify the coordinates to use for the left/right and top/bottom for the planes, and the near and far z-coordinates.

The way you pass that data to the vertex shader is through glUniformMatrix4fv, and the CC3GLMatrix class has a handy glMatrix method that converts the matrix into the array format OpenGL uses.

The last step is to tweak our vertices a bit so they fit within the near/far planes. This just sets the z-coordinate for each vertex to -7.

Compile and run the app, and now you should see the square looking like it’s in the distance a bit!

Square transformed with projection matrix

Adding Translation and Rotation

It was kind of annoying to have to go through our vertex array and move everything backwards by manually changing the z-values to -7.

That kind of thing is what matrix transforms are for! They allow you to move vertices around in space really easily.

So far our vertex shader modifies the positions of our vertices by applying the projection matrix to each vertex, so why not apply a transform/scale/rotation matrix as well? We’ll call this a “model-view” transform.

Let’s see how this works. Make the following changes to SimpleVertex.glsl:

// Add right after the Projection uniform
uniform mat4 Modelview;

// Modify the gl_Position line
gl_Position = Projection * Modelview * Position;

You should understand this perfectly by now! We’re just adding another uniform input matrix to pass in, and applying the modelview matrix to the position.

Then add a new instance variable to OpenGLView.h:

GLuint _modelViewUniform;

And make the following changes to OpenGLView.m:

// Add to end of compileShaders
_modelViewUniform = glGetUniformLocation(programHandle, "Modelview");

// Add to render, right before call to glViewport
CC3GLMatrix *modelView = [CC3GLMatrix matrix];
[modelView populateFromTranslation:CC3VectorMake(sin(CACurrentMediaTime()), 0, -7)];
glUniformMatrix4fv(_modelViewUniform, 1, 0, modelView.glMatrix);

// Revert vertices back to z-value 0
const Vertex Vertices[] = {
    {{1, -1, 0}, {1, 0, 0, 1}},
    {{1, 1, 0}, {0, 1, 0, 1}},
    {{-1, 1, 0}, {0, 0, 1, 1}},
    {{-1, -1, 0}, {0, 0, 0, 1}}
};

Here we get the reference to the model view uniform input variable, and use the Cocos3D math library to create a new matrix, and populate it with a translation.

The translation is -7 along the z-axis, so it fits within the near/far planes, and then something weird – the sin() of the current time?

If you think back to trig in high school, you might remember that sin() returns values from -1 to 1, and repeats with a period of 2π (about 6.28). So if we pass the current time in, the translation will sweep back and forth between -1 and 1, completing a full cycle roughly every 6.28 seconds.

Compile and run the code, and you’ll see the square appear in the center properly (even though we set the z back to 0):

Square transformed with model-view matrix

However, nothing moves! Well if you think about it, that makes sense because we’re only calling render one time (upon init)!

Let’s fix this by calling the render method every frame.

Rendering and CADisplayLink

Ideally we would like to synchronize the time we render with OpenGL to the rate at which the screen refreshes.

Luckily, Apple provides an easy way for us to do this with CADisplayLink! It’s really easy to use so let’s just dive in. Make the following changes to OpenGLView.m:

// Add new method before init
- (void)setupDisplayLink {
    CADisplayLink* displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(render:)];
    [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}

// Modify render method to take a parameter
- (void)render:(CADisplayLink*)displayLink {

// Remove call to render in initWithFrame and replace it with the following
[self setupDisplayLink];

That’s it! With this, CADisplayLink will call your render method every frame, the transformation will update based on the sin() of the current time, and the box will move back and forth!

A box moving with a translation

Gratuitous Rotation

I don’t think this is quite cool enough. It would be even cooler if it rotated, then we’d feel like we’re really 3D!

Add another instance variable to OpenGLView.h:

float _currentRotation;

Then add the following to OpenGLView.m, inside the render method, right after the call to populateFromTranslation:

_currentRotation += displayLink.duration * 90;
[modelView rotateBy:CC3VectorMake(_currentRotation, _currentRotation, 0)];

Here we use the _currentRotation variable to accumulate rotation over time: displayLink.duration is the length of one frame, so multiplying it by 90 increments the rotation by 90 degrees every second.

Next, we modify the model-view matrix (which is currently just a translation) to add a rotation as well. The rotation is about both the x and y axes, and 0 about the z axis.

Compile and run your code, and now you’ll see the square flip with a cool 3D effect!

A box rotating with a model-view transform

Gratuitous 3D Cube

And yet it’s still not cool enough! I’m sick of squares, time to move onto cubes.

It’s really easy, simply comment out the Vertices and Indices arrays and add new ones in their place:

const Vertex Vertices[] = {
    {{1, -1, 0}, {1, 0, 0, 1}},
    {{1, 1, 0}, {1, 0, 0, 1}},
    {{-1, 1, 0}, {0, 1, 0, 1}},
    {{-1, -1, 0}, {0, 1, 0, 1}},
    {{1, -1, -1}, {1, 0, 0, 1}},
    {{1, 1, -1}, {1, 0, 0, 1}},
    {{-1, 1, -1}, {0, 1, 0, 1}},
    {{-1, -1, -1}, {0, 1, 0, 1}}
};

const GLubyte Indices[] = {
    // Front
    0, 1, 2,
    2, 3, 0,
    // Back
    4, 6, 5,
    4, 7, 6,
    // Left
    2, 7, 3,
    7, 6, 2,
    // Right
    0, 4, 1,
    4, 1, 5,
    // Top
    6, 2, 1,
    1, 6, 5,
    // Bottom
    0, 3, 7,
    0, 7, 4
};

I got these vertices just by sketching out a cube on paper and figuring it out. I encourage you to try the same, it’s a good exercise!

Compile and run, and w00t we have a 3D cube… or do we?

3D Cube without Depth

It acts kinda like a cube, but doesn’t look quite right. Sometimes it seems like you can see the inside of the cube!

Luckily we can solve this pretty easily by enabling depth testing in OpenGL. With depth testing, OpenGL keeps track of a depth value for each pixel it draws, and only draws a new fragment if nothing is already in front of it.

Let’s add this in real quick. Add a new instance variable to OpenGLView.h:

GLuint _depthRenderBuffer;

Then make the following modifications to OpenGLView.m:

// Add new method right after setupRenderBuffer
- (void)setupDepthBuffer {
    glGenRenderbuffers(1, &_depthRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _depthRenderBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, self.frame.size.width, self.frame.size.height);
}

// Add to end of setupFrameBuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthRenderBuffer);

// In the render method, replace the call to glClear with the following
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);

// Add to initWithFrame, right before call to setupRenderBuffer
[self setupDepthBuffer];

The setupDepthBuffer method creates a depth buffer, in a similar manner to creating a render/color buffer. However, note that it allocates the storage with the glRenderbufferStorage function rather than the context’s renderbufferStorage:fromDrawable: method (which is a special case for the color render buffer backing the OpenGL view).

Then we call glFramebufferRenderbuffer to attach the new depth buffer to the frame buffer. Remember how I said that the frame buffer can hold several kinds of buffers? Here’s our first attachment besides the color buffer.

In the render method, we clear the depth buffer on each update, and enable depth testing.

Compile and run your project, and now you have an app with a rotating cube, using OpenGL ES 2.0, and made completely from scratch! :D

Rotating Cube with OpenGL ES 2.0 Example

Where To Go From Here?

Here is the sample project with all of the code from the above tutorial.

We’ve barely scratched the surface of OpenGL, but hopefully this helps give you guys a good grounding of the basics. Check out the Part 2 in the series, where we take things to the next level by adding some textures to our cube!

By the way, the reason I wrote this tutorial in the first place was because it was the winner of the tutorial poll on the sidebar from a few weeks back! Thank you to everyone who voted, and please be sure to keep voting in the sidebar and suggesting tutorials – we have a new tutorial vote every week!

If you have any questions, comments, or advice for a n00b OpenGL programmer like me, please join the forum discussion below!

Reposted from: https://www.cnblogs.com/dabaopku/archive/2012/03/19/2406501.html
