The previous article introduced OpenGL on Android, including coordinate mapping. In the OpenGL ES environment, projection and camera views make drawn objects look closer to what the eye actually sees; this is achieved by mathematically transforming the coordinates of the drawn objects. This article covers projection and camera views; its code examples build on those from the previous article.
The main content is as follows:
- Projection types
- Defining projection
- Defining camera views
- Applying projection and camera views
- Running effect
Projection types#
There are two main projection modes in OpenGL: orthographic projection and perspective projection. Their characteristics are as follows:
- Perspective projection: objects appear larger when closer and smaller when farther away, which matches human visual perception.
- Orthographic projection: objects keep the same size on the projection plane regardless of their distance.
The viewing volume of perspective projection is a frustum, while that of orthographic projection is a cuboid. The diagrams below illustrate the two projections:
The corresponding matrix calculation functions for perspective projection and orthographic projection are as follows:
// Perspective projection matrix
Matrix.frustumM(float[] m, int offset, float left, float right, float bottom, float top, float near, float far);
// Orthographic projection matrix
Matrix.orthoM(float[] m, int offset, float left, float right, float bottom, float top, float near, float far);
In the above functions, the parameter m stores the resulting projection matrix, near and far are the distances from the viewpoint to the near and far clipping planes of the viewing volume, and left, right, top, and bottom define the boundaries of the near clipping plane.
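As a quick sketch of how the two functions are called (the bounds below are illustrative values, not taken from this article's code):

val projMatrix = FloatArray(16)  // a 4x4 matrix stored as 16 floats

// Perspective: the near plane spans [-1, 1] x [-1, 1] at distance 3; the far plane is at distance 7
Matrix.frustumM(projMatrix, 0, -1f, 1f, -1f, 1f, 3f, 7f)

// Orthographic: a cuboid viewing volume with the same bounds
Matrix.orthoM(projMatrix, 0, -1f, 1f, -1f, 1f, 3f, 7f)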
Defining projection#
Based on the previous section, perspective projection is used here. The projection matrix is filled using Matrix.frustumM(), as shown below:
private val projectionMatrix = FloatArray(16)

override fun onSurfaceChanged(unused: GL10, width: Int, height: Int) {
    GLES20.glViewport(0, 0, width, height)
    val ratio: Float = width.toFloat() / height.toFloat()
    Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1f, 1f, 3f, 7f)
}
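As an aside, Android's Matrix class also provides perspectiveM, which builds a perspective matrix from a vertical field-of-view angle instead of near-plane bounds. A sketch that could replace the frustumM call above (the 45° angle is an arbitrary choice, not from the original code):

// Equivalent perspective setup driven by a field-of-view angle
Matrix.perspectiveM(projectionMatrix, 0, 45f, ratio, 3f, 7f)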
The above code fills the projection matrix projectionMatrix. Scaling left and right by the view's aspect ratio keeps the near plane proportional to the screen, which is what prevents stretching. Its changes are shown in the following animation:
Defining camera views#
As the name suggests, camera views observe an object from the perspective of a camera. The Matrix.setLookAtM method fills the view matrix; its key parameters are the camera position, the target position, and the camera's up vector. The projection matrix and view matrix are then combined into vPMatrix, as shown below:
override fun onDrawFrame(gl: GL10?) {
    // Draw the current frame
    Log.d(tag, "onDrawFrame")
    // Set the camera position (view matrix)
    Matrix.setLookAtM(viewMatrix, 0,
        0.0f, 0.0f, 5.0f,  // Camera position
        0.0f, 0.0f, 0.0f,  // Target position
        0.0f, 1.0f, 0.0f)  // Camera's up vector
    // Combine the projection and view transformation
    Matrix.multiplyMM(vPMatrix, 0, projectionMatrix, 0, viewMatrix, 0)
    // Perform the actual drawing
    triangle.draw(vPMatrix)
}
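For this to compile, viewMatrix and vPMatrix must exist as renderer fields, declared the same way as projectionMatrix earlier; a minimal sketch:

// 4x4 matrices stored as 16-float arrays
private val viewMatrix = FloatArray(16)
private val vPMatrix = FloatArray(16)  // holds projectionMatrix * viewMatrix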
In the example above, the triangle sits at z = 0, so the camera's z coordinate must lie between near and far, i.e. between 3 and 7; outside this range the triangle falls outside the viewing volume and cannot be observed, as the animation below shows.
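To reproduce the clipping in code, move the camera outside that range; a sketch in which only the eye's z value changes:

// Camera at z = 8: the triangle at z = 0 is now 8 units away, beyond far = 7,
// so it falls outside the viewing volume and nothing is drawn
Matrix.setLookAtM(viewMatrix, 0,
    0.0f, 0.0f, 8.0f,  // Camera position (too far)
    0.0f, 0.0f, 0.0f,  // Target position
    0.0f, 1.0f, 0.0f)  // Camera's up vector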
Applying projection and camera views#
To adapt to the projection and view transformation, modify the shader code in the previous article as follows:
// Default vertex shader
attribute vec4 vPosition;
void main() {
    gl_Position = vPosition;
}

// With the projection and view transformation applied
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
    gl_Position = uMVPMatrix * vPosition;
}
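In the renderer, the updated vertex shader is typically kept as a Kotlin string; a sketch, assuming a field name like the one below (it may differ in the previous article's code):

// Vertex shader with the projection-and-view transformation applied
private val vertexShaderCode = """
    uniform mat4 uMVPMatrix;
    attribute vec4 vPosition;
    void main() {
        gl_Position = uMVPMatrix * vPosition;
    }
""".trimIndent()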
Then simply pass the vPMatrix matrix calculated in the previous section to the shader:
fun draw(mvpMatrix: FloatArray) {
    // Activate the program compiled from the shaders above
    // (uniforms can only be set on the active program)
    GLES20.glUseProgram(programHandle)
    // Get handle to the vertex shader's vPosition member
    positionHandle = GLES20.glGetAttribLocation(programHandle, "vPosition").also {
        // Enable the vertex attribute; it is disabled by default
        GLES20.glEnableVertexAttribArray(it)
        GLES20.glVertexAttribPointer(
            it,
            COORDINATE_PER_VERTEX,
            GLES20.GL_FLOAT,
            false,
            vertexStride,
            vertexBuffer
        )
    }
    // Get handle to the fragment shader's vColor member and set the color
    colorHandler = GLES20.glGetUniformLocation(programHandle, "vColor").also {
        GLES20.glUniform4fv(it, 1, color, 0)
    }
    // Get handle to the shape's transformation matrix
    vPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "uMVPMatrix")
    // Pass the projection and view transformation to the shader
    GLES20.glUniformMatrix4fv(vPMatrixHandle, 1, false, mvpMatrix, 0)
    // Draw the triangle
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount)
    // Disable the vertex attribute when done
    GLES20.glDisableVertexAttribArray(positionHandle)
}
Applying the projection and camera view in code solves the distortion caused by switching between landscape and portrait orientations. The same approach naturally extends to other areas, such as preserving the aspect ratio when rendering video with OpenGL.
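For example, one way to keep video frames undistorted is to choose orthographic bounds from the ratio between the video's aspect and the view's aspect. A hedged sketch, assuming the frame is drawn on a quad spanning [-1, 1] in both axes (all names here are illustrative):

// Letterbox/pillarbox a videoWidth x videoHeight frame inside a
// viewWidth x viewHeight surface without stretching it
fun fillVideoProjection(matrix: FloatArray, videoWidth: Int, videoHeight: Int,
                        viewWidth: Int, viewHeight: Int) {
    val videoRatio = videoWidth.toFloat() / videoHeight
    val viewRatio = viewWidth.toFloat() / viewHeight
    if (viewRatio > videoRatio) {
        // View is wider than the video: widen the x bounds (bars on the sides)
        val r = viewRatio / videoRatio
        Matrix.orthoM(matrix, 0, -r, r, -1f, 1f, -1f, 1f)
    } else {
        // View is taller than the video: widen the y bounds (bars on top and bottom)
        val r = videoRatio / viewRatio
        Matrix.orthoM(matrix, 0, -1f, 1f, -r, r, -1f, 1f)
    }
}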
Running effect#
You can compare the result here with the running effect in the previous article. The running effect is as follows:
Reply with the keyword "OpenGL" to get the source code, or with the keyword "OTUTORS" to get the program used for the animations above.