Friday, April 5, 2013

OpenGL 1.0 - Part 1 - Simple Android Games

Introduction
We want to speed up awesomeguy so that game play will be more enjoyable. We're not going to address other aspects of gameplay that could also make the game more enjoyable. We are not, for example, going to change the theme of the game or the layout of any particular level. We just want to look at the speed.

A version of the game was written without JNI, using no C code at all. That version was slow because all the drawing of sprites and tiles had to be done in Java, and Java doesn't draw fast. The Java Native Interface code, the C code that we wrote, speeds up that drawing in particular. Before game play, each of the sprites and all of the tiles are loaded into memory that the C code maintains and has access to. During the course of the game the JNI code composes the image that is to be displayed on the screen, which happens faster than it would if Java alone were responsible for the drawing. Then the C code passes the composed image back to the Java code, where Java methods are responsible for putting it on the screen. That hand-off became the bottleneck: it took a lot of time.
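To make the bottleneck concrete, here is a minimal sketch of that old path. The names are hypothetical, not the actual awesomeguy code; it just shows the two expensive steps: the native call that copies the pixel array into Java, and the Java call that pushes it to the screen.

import android.graphics.Canvas;

public class OldRenderPath {
    // Hypothetical JNI entry point: the C code composes the frame and hands back ARGB pixels.
    private native int[] composeFrame(int width, int height);

    void drawFrame(Canvas canvas, int width, int height) {
        int[] pixels = composeFrame(width, height);   // step 1: copy the pixels from C into a Java array
        canvas.drawBitmap(pixels, 0, width, 0f, 0f,   // step 2: have Java paint that array onto the screen
                width, height, false, null);
    }
}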

OpenGL
What we want to do instead is use OpenGL to show the user the composed image -- the game screen -- and avoid the two method calls that slow the game down. We want to avoid the call that transfers the array of pixels from the JNI code to the Java code, and we want to avoid the call to the Java method that draws that array on the screen. We feel we can do this with OpenGL. Most Android phones have some version of OpenGL ES, and it is implemented in hardware. It is designed to draw 3D graphics to the screen, it is accessible to the C programmer, and it runs on the graphics hardware rather than on the processor that handles the user interface thread.

We're not going to discuss how OpenGL actually performs the drawing on the screen. What we want to do here is give a recipe for drawing to the screen, not explain how the phone carries that drawing out. How the OpenGL system handles depth, for example, is beyond the scope of this tutorial. The links on this page will tell the reader more in this regard than this tutorial does.

OpenGL ES Versions
There are several versions of OpenGL ES for the Android phone: 1.0, 1.1, and 2.0. We'll focus on version 1.0. It does everything we need and it's available on most phones, while version 2.0 is not available on as many. Version 1.0 is also good for us because it's less complicated and programs written against it will run on the Android emulator. That is something you cannot do with version 2.0, whose code has to be tested on an actual Android device, so development with version 1.0 is easier to test. Our needs are simple. I believe that for complex tasks version 2.0 is faster, but we don't need its complexity.
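If you want to check at run time which OpenGL ES version a device reports, the standard Android way is to ask the ActivityManager. Here is a small sketch; the helper class is our own, not part of the game code.

import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;

public class GlVersionCheck {
    // Returns true if the device reports OpenGL ES 2.0 or better.
    public static boolean supportsEs2(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        ConfigurationInfo info = am.getDeviceConfigurationInfo();
        // The version is packed as 0xMMMMmmmm: 0x00010001 is ES 1.1, 0x00020000 is ES 2.0.
        return info.reqGlEsVersion >= 0x00020000;
    }
}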

For some background on OpenGL see this link: http://developer.android.com/guide/topics/graphics/opengl.html . Another good page, for those who want to jump ahead into OpenGL on Android in Java, is here: http://www.jayway.com/2009/12/03/opengl-es-tutorial-for-android-part-i/ . We draw on this second site extensively.

Java Example
What we'll do here is explain how to set up OpenGL to do this, starting with a pure Java version. That may be helpful on another project in its own right. I have also found it useful to get OpenGL working in Java first: once it works, I know my Android settings are right, and then I copy each of the methods over to the JNI environment. This is a good way to debug the various Android settings that are required before you can 'turn on' OpenGL.

To start off lets look at the Android Manifest file. You must add this line:

<uses-feature android:glEsVersion="0x00010000" android:required="true" />
This declaration tells the phone that you will be using OpenGL ES version 1.0 (the hexadecimal value 0x00010000 encodes version 1.0). It goes inside the <manifest> element like any other regular XML element.

Now we need several classes that will be used for displaying our OpenGL content. This is fairly well documented. We need a 'GLSurfaceView' and a 'GLSurfaceView.Renderer'. A 'GLSurfaceView' is a View; to create one you 'extend' it in Java. A 'GLSurfaceView.Renderer' is an interface; to create one you 'implement' it in Java. The GLSurfaceView code is fairly simple, since most of the important stuff is already part of the class you are extending. The 'Renderer' code is more involved in that there are several methods that must be implemented. We use Eclipse, and Eclipse will create method stubs for unimplemented methods for us.

Here is an example of a GLSurfaceView. The GLSurfaceView is named 'PanelGLSurfaceView' for simplicity. You can see that the Renderer will be named 'Panel'.

import android.content.Context;
import android.opengl.GLSurfaceView;

public class PanelGLSurfaceView extends GLSurfaceView {
    Panel mPanel; // this is our renderer!!

    public PanelGLSurfaceView(Context context) {
        super(context);
        mPanel = new Panel(context);
 
        setRenderer(mPanel);

    }

}


Here is an example of what Eclipse gives you for the 'GLSurfaceView.Renderer' interface. This is where the actual drawing goes on. The 'GLSurfaceView', by contrast, does not contain much: it's a View and can be included in layouts the way any View can. It's also the place to put code that responds to screen touches, but we're not going to handle touch events in this example.

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.content.Context;
import android.opengl.GLSurfaceView;

public class Panel implements GLSurfaceView.Renderer {

 private Context mContext;

 public Panel(Context context) {
  // keep a reference to the context so we can load bitmap resources later
  mContext = context;
 }

 @Override
 public void onDrawFrame(GL10 gl) {
  // TODO Auto-generated method stub
  
 }

 @Override
 public void onSurfaceChanged(GL10 gl, int width, int height) {
  // TODO Auto-generated method stub
  
 }

 @Override
 public void onSurfaceCreated(GL10 gl, EGLConfig config) {
  // TODO Auto-generated method stub
  
 }

}


Note: this code is not particularly useful at this stage because the overridden methods do not do anything yet. It is, though, the framework for what we do later on. The important methods are 'onDrawFrame()', 'onSurfaceChanged()', and 'onSurfaceCreated()'. Note that each of these methods has a 'GL10 gl' parameter passed to it, so that later, when we need OpenGL constants and methods, we can reach them through the 'gl' variable in each of our methods.

Activity Class
The first thing we're going to do is get OpenGL working in Java. Then we'll proceed to implement it in C through JNI. At the start we want to make sure that the GLSurfaceView is shown in something the user sees. For this example we'll add the GLSurfaceView to an Activity programmatically. There is a way to include the GLSurfaceView in an XML layout, but we're not going to explore that. Here is an Activity that will show our GLSurfaceView.

import android.os.Bundle;
import android.app.Activity;

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        PanelGLSurfaceView mSurfaceView = new PanelGLSurfaceView(this);

        // instead of setContentView(R.layout.activity_main) we hand our view in directly
        setContentView(mSurfaceView);
    }

}
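One detail worth adding, not specific to this game but recommended for GLSurfaceView in general: the Activity should forward its pause and resume callbacks to the view so that rendering stops and restarts cleanly. A minimal sketch, assuming 'mSurfaceView' has been promoted from a local variable in onCreate() to a field of the Activity:

    @Override
    protected void onPause() {
        super.onPause();
        mSurfaceView.onPause();   // pauses the GL rendering thread
    }

    @Override
    protected void onResume() {
        super.onResume();
        mSurfaceView.onResume();  // resumes the GL rendering thread
    }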


Setting Screen Color
This is similar to how we would show our surface view with the actual game. Now, back in the renderer we make some changes. What we want to do is show that OpenGL is correctly configured for the app. Here's a modified version of the 'onDrawFrame()' method.

    @Override
    public void onDrawFrame(GL10 gl) {
        // these are two enumerations 'or'-ed together.
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT |  GL10.GL_DEPTH_BUFFER_BIT);
    }
And here we have a modified version of the 'onSurfaceCreated()' method.
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // This is the background color. RGBA
        gl.glClearColor(0.5f, 0.0f, 0.0f, 0.5f);  
        // Smooth Shading
        gl.glShadeModel(GL10.GL_SMOOTH);
        // This is a setting that affects how depth is shown
        gl.glClearDepthf(1.0f);
        
        // More settings that affect how depth is handled
        gl.glEnable(GL10.GL_DEPTH_TEST);
        gl.glDepthFunc(GL10.GL_LEQUAL);
        gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
    }


The 'onSurfaceCreated()' method is longer. Here the color that we want to paint all over the screen is a dark red. The 'gl.glClearColor()' method takes red, green, blue, and alpha values as its parameters. With just these two methods implemented in this particular way we can tell whether the app is configured properly: when we run the app, this activity shows a red screen. How the phone actually produces that image is not covered in this tutorial. The point is that this is OpenGL displaying the screen; the Java we've written mostly just passes information to the GL hardware, so the graphics appear very quickly, though at this stage you cannot tell that they are any faster than any other means of display.

What we'll do next is set up a texture and display that texture.


Start Showing An Object
Here we start adding code that will later show up as essential to the JNI code that we are going to write. In the next code snippet we fill in part of the 'onSurfaceChanged()' method. We will be echoing this code in C later on.

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        
        // This is the viewport.
        gl.glViewport(0, 0, width, height);
        
        // these two lines clear the 'projection' matrix
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        
        // this line requires more import statements!!
        GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
        
        // these two lines clear the 'modelview' matrix
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
    }


Vertices
If you want to learn about 3D images, this might be a good place to start. Now we're going to define, in the 'onDrawFrame()' method, a set of vertices. These vertices form a wire-like frame which we'll later stretch an image over; this process is essentially what we'll be doing in JNI. The wire frame is held in a float array called 'vertices'. Floats are floating point numbers of a certain size, and they have a sign, which means they can be negative or positive. The ones we'll use define a square. The vertices are in the form (x, y, z), where z is always zero because all of our points lie in the same plane at z = 0; this is not necessarily the case for all objects. The first vertex is the upper left corner, at (-1.0, 1.0, 0.0). The second is the lower left, (-1.0, -1.0, 0.0); the third is the lower right, (1.0, -1.0, 0.0); and the fourth is the upper right, (1.0, 1.0, 0.0). These four points make the surface upon which we will display our game screen.

In Java we need to convert the array of vertices into a ByteBuffer before OpenGL can use it. As it turns out, this is not a restriction in C: there the vertex array is already a plain block of memory and can be passed to the hardware as-is. This is the modified 'onDrawFrame()' method.

    @Override
    public void onDrawFrame(GL10 gl) {
        // these are two enumerations 'or'-ed together.
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT |  GL10.GL_DEPTH_BUFFER_BIT);
        
        float vertices[] = {
                  -1.0f,  1.0f, 0.0f,  
                  -1.0f, -1.0f, 0.0f,  
                   1.0f, -1.0f, 0.0f,  
                   1.0f,  1.0f, 0.0f,  
        };

        
        ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4);
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer vertexBuffer = bb.asFloatBuffer();
        vertexBuffer.put(vertices);
        vertexBuffer.position(0);
    }


Each float is four bytes long.
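That is why we allocate 'vertices.length * 4' bytes. Since the same float-array-to-FloatBuffer dance comes up again later for the texture coordinates, it could be pulled out into a small convenience method. This helper is our own addition, not part of the original listing; it uses the same java.nio imports listed at the end of this post.

    private static FloatBuffer makeFloatBuffer(float[] values) {
        ByteBuffer bb = ByteBuffer.allocateDirect(values.length * 4); // 4 bytes per float
        bb.order(ByteOrder.nativeOrder());  // OpenGL wants the platform's native byte order
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(values);
        fb.position(0);  // rewind so OpenGL reads from the start
        return fb;
    }

With it, the vertex setup above shrinks to a single line: FloatBuffer vertexBuffer = makeFloatBuffer(vertices);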

Indices
Next we worry about indices. We use indices to define the order in which the vertices are used. When we draw the vertices as a solid we draw triangles made up of those vertices, and the triangles are defined by the indices. You tell OpenGL which side of a triangle is its front by setting the winding, which can be clockwise or counter-clockwise. Some experimentation can be helpful here.

A plane, the basis for the 3D objects that OpenGL displays, can easily be defined using three points; three points make a triangle. What we do is define a whole set of points, and then define the triangles that make up the surface by establishing the order in which to consider those points. The vertices are the mass of points, and the indices tell OpenGL what order to follow to make triangles out of them. You can even repeat vertices in the list of indices in order to fully define your surfaces. What you must do, though, is make one long list of indices that represents all the faces you are interested in. The computer knows which side is 'out' from your choice of clockwise or counter-clockwise winding. We have four points (vertices) arranged as two triangles side by side, and we use a table of six indices to define those two triangles. Like the vertices, the indices need to be copied into a ByteBuffer for Java to pass them to the OpenGL hardware.
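To make this concrete, here is how the six index values used in the listing below map onto our four vertices; the comments are our own annotation of that fragment.

        // vertex 0 = upper left, 1 = lower left, 2 = lower right, 3 = upper right
        short[] indices = { 0, 1, 2,    // first triangle: upper left, lower left, lower right
                            0, 2, 3 };  // second triangle: upper left, lower right, upper right
        // Both triangles are listed counter-clockwise, matching glFrontFace(GL_CCW) below.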

For indices we'll use shorts; a short is sixteen bits (two bytes). In the full listing below we introduce enough code for a square to be drawn on the screen. We'll go over some of the code in the explanation afterward.

    @Override
    public void onDrawFrame(GL10 gl) {
        // these are two enumerations 'or'-ed together.
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT |  GL10.GL_DEPTH_BUFFER_BIT);
        
        gl.glLoadIdentity(); 
        gl.glTranslatef(0, 0, -4); 
        
        float vertices[] = {
                  -1.0f,  1.0f, 0.0f,  
                  -1.0f, -1.0f, 0.0f,  
                   1.0f, -1.0f, 0.0f,  
                   1.0f,  1.0f, 0.0f,  
        };

        
        ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4);
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer vertexBuffer = bb.asFloatBuffer();
        vertexBuffer.put(vertices);
        vertexBuffer.position(0);
        
        short[] indices = { 0, 1, 2, 0, 2, 3 };
        
        ShortBuffer indexBuffer;
        
        ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
        ibb.order(ByteOrder.nativeOrder());
        indexBuffer = ibb.asShortBuffer();
        indexBuffer.put(indices);
        indexBuffer.position(0);
        
        // CCW stands for 'counter-clockwise'
        gl.glFrontFace(GL10.GL_CCW); 
        
        gl.glEnable(GL10.GL_CULL_FACE); 
        gl.glCullFace(GL10.GL_BACK); 
        
        // name your vertex buffer
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0,  vertexBuffer);

        // name your index buffer
        gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer);
        // disable things when you're done using them
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); 
        gl.glDisable(GL10.GL_CULL_FACE); 
        
        
    }


Showing The Square
To see the square we have to execute the two commands near the top of the listing: 'glLoadIdentity()' and 'glTranslatef()'. The load-identity command puts the camera -- the imaginary camera responsible for perspective in this code -- back in its starting position every time 'onDrawFrame()' is called; this method is called over and over again and is responsible for animation in 3D. The translate command moves the scene the camera is viewing in the negative z direction. Without this command the square sits at exactly the same position as the camera and therefore cannot be seen. You can try the code without these two lines to see how it behaves.
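If you want to convince yourself that 'onDrawFrame()' really is called over and over, a quick experiment is to change something a little on every call; adding a rotation that advances each frame makes the square visibly spin. This is our own experiment, not part of the game code, and 'mAngle' is an assumed field of the renderer.

    private float mAngle = 0.0f;

    // inside onDrawFrame(), right after gl.glTranslatef(0, 0, -4):
    gl.glRotatef(mAngle, 0.0f, 0.0f, 1.0f);  // spin the model around the z axis
    mAngle += 1.0f;                          // one degree further every frame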



Picking A Texture
Textures can be the source of lots of problems with OpenGL. You have to use a square image, a PNG, whose sides are powers of 2; here we used 128x128. Sometimes textures will even work on the emulator and not on the phone, so ultimately you should test your app on both. Below is our texture. To use it we made a folder called 'drawable' in the 'res' folder of our Android project and put the file, called 'texture.png', there. After refreshing the project in Eclipse and building it once we can access the file as R.drawable.texture.
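Because the power-of-two requirement is easy to get wrong, a quick sanity check on the decoded bitmap can save some head-scratching. This helper is our own addition, not part of the original code:

    private static boolean isPowerOfTwo(int n) {
        // a power of two has exactly one bit set
        return n > 0 && (n & (n - 1)) == 0;
    }

    // Example use, with the same R.drawable.texture resource as below:
    // Bitmap bitmap = BitmapFactory.decodeResource(mContext.getResources(), R.drawable.texture);
    // boolean sizeOk = isPowerOfTwo(bitmap.getWidth()) && isPowerOfTwo(bitmap.getHeight());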



Texture Coordinates
Texture coordinates are called 'u-v' coordinates. They are set up just like x/y coordinates in an x/y plane; the only reason we don't call them x-y coordinates is that the variables x, y, and z are already taken by our vertex descriptions. U is like x and refers to the horizontal component of the coordinate; V is like y and refers to the vertical component. UV coordinates act as if they were x/y coordinates in the first quadrant, which means that (0,0) is the lower left of the texture, (1,1) is the upper right, (1,0) is the lower right, and (0,1) is the upper left. We specify one texture coordinate for each of our vertices.
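In our case that pairing looks like this. The coordinate pairs are the same ones that appear in the listing below; the comments are our own annotation.

        // vertex 0 (-1,  1, 0)  pairs with UV (0, 0)
        // vertex 1 (-1, -1, 0)  pairs with UV (0, 1)
        // vertex 2 ( 1, -1, 0)  pairs with UV (1, 1)
        // vertex 3 ( 1,  1, 0)  pairs with UV (1, 0)
        float textureCoord[] = {
                0.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 1.0f,
                1.0f, 0.0f };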

Since UV coordinates are floats, you can also specify values below 0 or above 1. The range 0 to 1 covers the whole texture, but by choosing other values you can stretch or tile the texture as it is applied to the vertices. Again, experimentation is useful here. When we write our C version of this code we will use this property to display only part of the texture on the screen, since all textures are square and our game screen is not.
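As a hedged example of what that will look like, capping U at 0.5 would stretch only the left half of the texture across the square. These particular numbers are illustrative, not the values the game will eventually use.

        float partialCoord[] = {
                0.0f, 0.0f,   // lower left of the texture
                0.0f, 1.0f,   // upper left
                0.5f, 1.0f,   // top middle -- nothing to the right of U = 0.5 is shown
                0.5f, 0.0f }; // bottom middle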

Code Listing
The code for painting the texture on our square goes BEFORE the code that simply draws the shape on the screen. Our listing has the same 'onDrawFrame()' body as before, but the texture code comes first.

    @Override
    public void onDrawFrame(GL10 gl) {
        
        // start texture code... load bitmap
        Bitmap bitmap = BitmapFactory.decodeResource(mContext.getResources(), R.drawable.texture);
        
        // just one texture
        int[] texturesArray = new int[1];
        gl.glGenTextures(1, texturesArray, 0);
        
        // bind the new texture, then say what to do if it must be shrunk or magnified
        gl.glBindTexture(GL10.GL_TEXTURE_2D, texturesArray[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        
        float textureCoord[] = {
                0.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 1.0f,
                1.0f, 0.0f };
        
        // what to do at the edges of the texture (clamp in S, repeat in T)
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        
        FloatBuffer textureBuffer;        
        ByteBuffer byteBuf = ByteBuffer.allocateDirect(textureCoord.length * 4);
        byteBuf.order(ByteOrder.nativeOrder());
        textureBuffer = byteBuf.asFloatBuffer();
        textureBuffer.put(textureCoord);
        textureBuffer.position(0);
        
        // set texture for later drawing
        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, texturesArray[0]);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
      
        // draw everything here that was at the beginning of the method

        gl.glClear(GL10.GL_COLOR_BUFFER_BIT |  GL10.GL_DEPTH_BUFFER_BIT);

        
        gl.glLoadIdentity(); 
        gl.glTranslatef(0, 0, -4); 
        
        float vertices[] = {
                  -1.0f,  1.0f, 0.0f,  
                  -1.0f, -1.0f, 0.0f,  
                   1.0f, -1.0f, 0.0f,  
                   1.0f,  1.0f, 0.0f,  
        };

        
        ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4);
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer vertexBuffer = bb.asFloatBuffer();
        vertexBuffer.put(vertices);
        vertexBuffer.position(0);
        
        short[] indices = { 0, 1, 2, 0, 2, 3 };
        
        ShortBuffer indexBuffer;
        ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
        ibb.order(ByteOrder.nativeOrder());
        indexBuffer = ibb.asShortBuffer();
        indexBuffer.put(indices);
        indexBuffer.position(0);
        
        // CCW stands for 'counter-clockwise'
        gl.glFrontFace(GL10.GL_CCW); 
        
        gl.glEnable(GL10.GL_CULL_FACE); 
        gl.glCullFace(GL10.GL_BACK); 
        
        // name your vertex buffer
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0,  vertexBuffer);

        // name your index buffer
        gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer);
        
        // disable things when you're done using them
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); 
        gl.glDisable(GL10.GL_CULL_FACE); 
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glDisable(GL10.GL_TEXTURE_2D);
    }


Some explanations follow.



This is a rather lengthy listing, but it shows that OpenGL ES can be used from Java on an Android phone. We have left much of the code above undescribed, but some things deserve special mention. One is that the floats that make up the texture coordinates have only two components each: U and V, with no third coordinate. Also, like the vertices, the UV coordinates are converted to a ByteBuffer so that Java can pass the values to the OpenGL engine.

The texture coordinate code comes before the code that actually draws the vertices. Also, the coordinates we see are Cartesian: for the vertices, (0, 0) is in the center of the picture, while for texture coordinates (0, 0) is the lower left. There is exactly one texture coordinate for each vertex.
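One more observation: the listing re-decodes the bitmap and re-uploads the texture on every frame, which keeps the whole example in one method but wastes work. In a real app the texture setup would normally happen once, in 'onSurfaceCreated()', with only the texture id kept around. Here is a hedged sketch of that rearrangement; the 'mTextureId' field is our own assumption, not part of the original listing.

    private int mTextureId;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // ... the existing clear-color, shading, and depth settings go here ...

        Bitmap bitmap = BitmapFactory.decodeResource(mContext.getResources(), R.drawable.texture);
        int[] texturesArray = new int[1];
        gl.glGenTextures(1, texturesArray, 0);
        mTextureId = texturesArray[0];

        gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();  // the pixel data now lives on the GL side

        // onDrawFrame() can then just call gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId)
    }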

Java Import Listing
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLSurfaceView;
import android.opengl.GLU;
import android.opengl.GLUtils;
This is the list of imports for the Panel.java class.

Moving On
Next we'll try to do the same thing we've accomplished here in C, and connect it to the Android app with JNI.
