Multimedia Programming and Mobile Devices (Teaching Material)

Module 8 focuses on multimedia programming and mobile devices, covering the development of applications for mobile devices, multimedia programming, and game development. It discusses technologies, limitations, integrated work environments, and tools for mobile application development, with a particular emphasis on Android Studio and Eclipse as IDEs. The module also addresses the application life cycle, emulators, and the various components necessary for creating mobile applications and games.

ILERNA Online

Module 8
Multimedia programming and mobile devices
UF1: DEVELOPMENT OF APPLICATIONS FOR MOBILE DEVICES
1. Analysis of technologies for applications on mobile devices
1.1. Limitations of running applications on mobile devices
1.2. Integrated work environments
1.3. Modules for mobile application development
1.4. Emulators
1.5. Settings
1.6. Profiles
1.7. Application life cycle
1.8. Modifying existing applications
1.9. Using Application Manager Runtime Environments
2. Mobile device programming
2.1. Tools and construction phases
2.2. User interfaces. Associated classes
2.3. Graphic context. Images
2.4. Events
2.5. Animation and sound techniques
2.6. Service Discovery
2.7. Databases and storage
2.8. Persistence
2.9. Thread model
2.10. Communications. Associated classes. Types of connections
2.11. Wireless communication management
2.12. Sending and receiving text messages. Security and permissions
2.13. Sending and receiving multimedia messages. Content synchronization. Security and permissions
2.14. Handling HTTP and HTTPS connections
UF2: MULTIMEDIA PROGRAMMING
1. Using integrated multimedia libraries
1.1. Multimedia Applications Concepts
1.2. Architecture of the API used
1.3. Multimedia data sources. Classes
1.4. Time-based data
1.5. Multimedia object processing. Classes. States, methods and events
1.6. Playing multimedia objects. Classes. States, methods and events
UF3: DEVELOPMENT OF GAMES FOR MOBILE DEVICES
1. Game Engine Analysis
1.1. Animation concepts
1.2. Game Architecture: Components
1.3. Game Engines: Types and Uses
1.4. Areas of specialization, libraries used and programming languages
1.5. Components of a game engine
1.6. Libraries that provide the basic functions of a 2D/3D engine
1.7. 3D graphic API
1.8. Study of existing games
1.9. Applying modifications to existing games
2. Development of 2D and 3D games
2.1. Development environments for games
2.2. Integrating the game engine into development environments
2.3. Advanced 3D programming concepts
2.4. Development phases
2.5. Object properties: light, textures, reflection and shadows
2.6. Application of the graphics engine functions. Rendering
2.7. Application of scene graph functions. Types of nodes and their use
2.8. Execution analysis. Code optimization
Literature
Webgraphy

UF1: DEVELOPMENT OF APPLICATIONS FOR MOBILE DEVICES
This training unit will analyze the different technologies used in applications for
mobile devices, as well as the programming of said devices, taking into account the
main tools, models and techniques.

1. Analysis of technologies for applications on mobile devices

This first topic will analyze the different technologies used in applications for mobile
devices, their limitations, integrated work environments, modules for developing
them, emulators, configurations, profiles, their life cycle and modifications of
existing applications.

1.1. Limitations of running applications on mobile devices

When developing a mobile application, there are some requirements that need to be taken into account. Each platform imposes a series of requirements that must be met for an application to run correctly.

Here are some considerations to check beforehand:

• Android mobile devices are generally small or medium-sized and portable, so the hardware is limited. This means that processing speed and loading capacity are not as high as in desktop applications.
• The user interface must be adjusted to small screen sizes, which requires the
visible content to be organized appropriately.
• Most applications require a data connection (4G, 3G, Internet) which is
limited by its bandwidth. It is possible that, at times, the reception signal
from a device may be intermittent or non-existent.


• An important aspect, which is often overlooked, is security. Using applications that require data roaming means being exposed to a greater number of security problems, such as viruses or the execution of services by third parties that are beyond the user's control, especially in applications that have access permissions to personal data stored on the phone.
• Each device has certain hardware characteristics. Device memory may vary
depending on the manufacturer. This can result in poor application
performance, even preventing its installation because it does not meet the
minimum requirements.
• Another of the biggest problems that exist today is the battery consumption
by some applications. This fact is very important, since it is necessary to
optimize the resource consumption of an application.
• Storage: Internal storage, like memory, is a hardware component that
depends on the manufacturer of each mobile device.

A good practice is to optimize the application code as much as possible, making sure that the resulting application is not excessively large.

In addition to these, there are other limitations specific to Android. When developing an application that is intended to be marketed, it is important to establish the range of devices for which the application's build version will be accepted, defining what the minimum and recommended version will be for its operation.


1.2. Integrated work environments

Today, there are various Android app development platforms, which provide
developers with all the tools necessary to create applications for mobile devices.
These platforms are called IDE (Integrated Development Environment). Two of the
most notable and frequently used are Eclipse and Android Studio.

The choice of IDE is a subjective matter, which in most cases is determined by the
application developer himself.

Below is a comparison between both IDEs:

• Eclipse:
- It is a very powerful platform with a lot of debugging tools.
- It is part of a Java distribution, whose language is the basis of Android.
- It has a large number of plugins (ADT Plugin) with the Android application
development tools.
- Full integration with the SDK manager from the IDE with everything
needed to install all versions of Android.

• Android Studio:
- It is the platform recommended by most developers.
- It is purely Android, since it is created to develop applications in this
language.
- Allows for easier plugin installation and integration than Eclipse.
- Compiling and exporting .apk files is easier.

- It is guaranteed to be supported by Google, as Eclipse will no longer have this support in the Android plugin.
- It optimizes the resources allocated to Android emulators better.


Once the advantages and disadvantages of each have been assessed, the IDE used for
application development will be Android Studio. This offers greater guarantees for the
future in the development of mobile applications.

Android is an established platform that has been in continuous development over the last few years. It has spread throughout the world, becoming the most used operating system on mobile devices. From the development of its first versions, the platform has been thoroughly documented, which makes it easier for developers to create applications.

1.3. Modules for mobile application development

Before you start developing, you need to install the appropriate IDE. In this case, Android Studio has some prerequisites that must be met before installation:

• Operating system: Windows 7/8/10 (32 or 64 bits).
• RAM: 2 GB minimum (8 GB recommended).
• Hard drive: 2 GB of free space minimum (4 GB recommended).
• Minimum resolution: 1280 x 800.
• Java version: Java 8.
• Processor: 64-bit Intel processor (with virtualization technology).

Android Studio compiles applications using the Java language. Therefore, it is necessary to install the JDK (Java Development Kit), which provides the development tools for Java applications. This tool can be found on the Oracle website, from where it must be downloaded and installed.

Once installed, the next step is to install Android Studio. During installation it will be necessary to include a fundamental module for development: the SDK Manager, which will allow you to download all the packages needed to compile applications within the IDE. Another module that will need to be installed is the Java Virtual Machine (JVM), which is able to interpret the compiled Java code and is responsible for executing all the instructions to be emulated.

1.4. Emulators

Android provides a tool to communicate with an emulator or a connected device. It is called ADB (Android Debug Bridge), and it allows you to perform any action as if it were a physical device, allowing you to debug any application. Android's own emulator is the AVD (Android Virtual Device), which can emulate all the features of tablets, Android Wear devices, televisions and mobile phones. In addition to Android's own emulator, there are others that perform these same actions. One of the most used and most flexible is Genymotion (which is paid, but provides a 30-day free trial). It is a very effective emulator that also allows complete integration with Android Studio through its plugin. It makes it possible to choose the characteristics of a specific device to emulate very quickly, and even to create several devices with different versions of Android.

1.5. Settings

• Types and characteristics:


Both Genymotion and the emulator provided by Android Studio allow you to emulate
even sensors and other features of a device, such as:

- Battery: Enabling this feature allows you to see how the phone behaves when
the battery is low, or when warning windows are displayed.
- GPS: activate or deactivate GPS using specific coordinates.
- Camera: Use the front or rear camera on your device.
- System Files: Access the device's file system.
- Remote control: control the emulator's behavior from another physical device.
- Identifiers: You can view and modify the Android and device identifier.
- Network: Control whether the device's data network is enabled or remains
offline.
- Calls: Emulate the behavior of the device when making a phone call or writing a text message.

• Supported devices:

These emulators allow you to simulate different devices, such as the Google Nexus 4, Google Nexus 5, Google Nexus 7, Samsung Galaxy S3, etc. They give you the possibility of
emulating a fully customized device, configuring the number of processors, memory
size, screen resolution, Internet connection, navigation bar interface and Android
version.

In the case of Genymotion, its installation and configuration is very simple; just
follow the steps provided in its documentation. Its versatility allows it to be
integrated with the main development IDEs such as Android Studio and Eclipse
through the plugin that exists for each of them.

1.6. Profiles

Characteristics

Emulators have pre-defined hardware profiles. This way, if a device that has already
been created matches the characteristics of the device you want to emulate, it would
not be necessary to customize a new profile. These profiles cannot be modified, as
they are included within the AVD.

If we need to modify the characteristics of our emulator, we can create a virtual device by adapting the parameters to what we are looking for. This is done (with the Android Studio emulator) in AVD Manager / + Create Virtual Device. The following information should then be configured:

• Name of the device.
• Device type: whether it is a tablet, Android Wear device, mobile phone or Android TV.
• Screen Size: The physical size this screen would be in inches.
• Screen resolution: maximum resolution that the device has. This is measured
in number of pixels.
• RAM: Size of the device's RAM.
• Inputs: if the device has an external hardware keyboard. If this is selected, the
device will not display a built-in keyboard, but will instead pick up keyboard
events from the computer.
• Navigation Style: How the device will be controlled.
• Cameras: if it has a front and rear camera.
• Sensors: gyroscope, accelerometer, GPS, etc.


• Skin: Control the appearance of the device.

It should be noted that any type of memory chosen for our emulator will be taken
from the physical machine, that is, from our own computer.
Architecture and requirements

In order to use these predefined profiles, it is important to take into account some requirements, which are already defined in the Android documentation. The fundamental requirement is that the computer's hardware must support device emulation and virtualization. For more details, please consult the documentation provided by Android on its official website.

You can visit the official Android website at the following link:
www.android.com

Supported devices

All types of devices are supported and have a profile within the Android AVD. It is
possible to use, as already indicated, tablets, Android Wear, mobile phones or
Android TV.

1.7. Application life cycle

An application consists of one or more activities. An activity is the component of the application that allows interaction with the user; therefore, an activity is each of the screens that make up the application.

The activities are divided into two parts: the logical layer and the graphical part. The
logical layer is responsible for establishing the operation of the application, and is
located in the project's .java files. The graphical layer is made up of the XML files that form the different layouts of the application. It is responsible for specifying the
elements that make up the different activities.
Therefore, each activity in an application needs to have a java file and an xml file. The
java file will be responsible for calling the xml file to load its content into the
application.

Activities have three states: resumed, paused and stopped, which will change
depending on the events that are launched.

• Resumed: the activity is currently running.
• Paused: the activity is stopped, although it is still visible.
• Stopped: the activity is stopped and is not visible.


As can be seen in the activity life cycle diagram, there are a series of events or callbacks that take place to produce the state changes in the activity.


• onCreate: the event produced when the activity is created. Its function is to establish the corresponding layout of the activity and initialize its important resources.
• onRestart: the event that is triggered when a stopped activity is about to be restarted.
• onStart: the event that occurs just before the activity is displayed.
• onResume: the event executed just before the user starts interacting with the activity.
• onPause: the event produced when the activity loses focus.
• onStop: the event produced when the activity is no longer visible.
• onDestroy: the event that occurs when the activity ends, that is, when the finish() function is called. It can also be produced automatically by the operating system.
All these events are related, that is, each of these events has its inverse.

• The onCreate event reserves resources, while the onDestroy event releases
them.
• With the onStart event the activity is visible, while with the onStop event it
loses its visibility.
• With the onResume event the activity gains focus, while with the onPause
event it loses focus.
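As a hedged illustration (not taken from the original material; the layout name is the one used elsewhere in this module), the following activity overrides these callbacks and writes a log message in each one, which makes the life cycle easy to observe in Logcat:

CODE:

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class LifecycleActivity extends Activity {

    private static final String TAG = "LifecycleActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main); // load the layout of the activity
        Log.d(TAG, "onCreate: resources are reserved here");
    }

    @Override
    protected void onStart() {
        super.onStart();
        Log.d(TAG, "onStart: the activity becomes visible");
    }

    @Override
    protected void onResume() {
        super.onResume();
        Log.d(TAG, "onResume: the activity gains focus");
    }

    @Override
    protected void onPause() {
        super.onPause();
        Log.d(TAG, "onPause: the activity loses focus");
    }

    @Override
    protected void onStop() {
        super.onStop();
        Log.d(TAG, "onStop: the activity is no longer visible");
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        Log.d(TAG, "onDestroy: resources are released here");
    }
}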
When you create a new project, the Android Studio IDE offers the ability to create a
starter activity for the project. But how do you create a new activity? Here are the
steps to follow:
Step 1
An activity is created within the project. As you can see, it will inherit from the Activity class and have its corresponding Java and XML files. If it is created as an Activity from the IDE, the layout will already be linked to it. If not, you need to specify the corresponding layout in the onCreate event.

CODE:

super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Step 2
The activity is declared in the AndroidManifest.xml.

CODE:

<application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
    <activity android:name=".MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>
    <activity android:name=".Main2Activity"></activity>
</application>

It is recommended to create the activity by right-clicking on app / New / Activity. This will automatically create the Java file, the layout and the declaration of the new activity in the AndroidManifest.

Once the new activity has been created, it can be launched. To do this, you must
create an Intent object.

An intent is an element of communication between the different components of an application. These components can be internal or external. An intent is responsible, for example, for launching an activity or a service in our application, or for opening a web page.

An object of the Intent class must be created, indicating the context in which it is located and the activity that you want to launch. Then the new activity is started.

There are two methods to launch the activity, depending on whether or not you are interested in receiving a result back from the new activity:
• startActivity(intent).
• startActivityForResult(intent, code): This method expects a result
associated with the established code.
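As a hedged sketch (the request code, extra key and activity names are assumptions for the example), launching an activity for a result and collecting that result could look like this:

CODE:

// In the calling activity. REQUEST_CODE is an arbitrary constant chosen by the developer.
private static final int REQUEST_CODE = 1;

private void openSecondActivity() {
    Intent intent = new Intent(MainActivity.this, Main2Activity.class);
    startActivityForResult(intent, REQUEST_CODE);
}

// Called automatically when Main2Activity finishes and returns a result.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_CODE && resultCode == RESULT_OK && data != null) {
        String value = data.getStringExtra("name");
        // use the returned value here
    }
}

// In Main2Activity, before calling finish():
Intent result = new Intent();
result.putExtra("name", "Ilerna");
setResult(RESULT_OK, result);
finish();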

Additionally, it is possible to send parameters to the new activity using the putExtra() method, and receive them in the new activity using the getExtras() method.

CODE:

In the MainActivity:

public void onClick(View view) {
    Intent intent = new Intent(MainActivity.this, Main2Activity.class);
    intent.putExtra("name", "Ilerna");
    startActivity(intent);
}

In the Main2Activity:

Bundle extras = getIntent().getExtras();
String name = extras.getString("name");


1.8. Modifying existing applications

Just as it is possible to emulate predefined devices, the same idea can be applied to applications that have already been developed. Android allows you to
download and modify a large number of example applications from its official
website. These applications can serve as a guide for the development of a new
application. You can reuse functions that already work correctly in these examples
and add them to the project you are developing.

Developers with an advanced level of Android can also add functionality to existing applications. These applications can be found in a publicly accessible repository, which keeps track of the versions and changes made by the different developers.

Another situation that requires modification is maintenance work. Once an application has been launched, users often find errors or bugs in it, which will later be fixed. These updates allow all aspects of the application to be modified periodically.

These types of applications can be found in the samples section of the official
Android developers page.

You can visit the official Android developers page at:

https://developer.android.com/


1.9. Using Application Manager Runtime Environments

The Android application runtime environment is known as Android Runtime (ART). It is responsible for running Android applications from version 4.4 KitKat onwards, as a substitute for Dalvik (which was based on a just-in-time (JIT) architecture). The architecture used by ART is ahead-of-time (AOT) compilation, which creates a compiled file once the application has been installed on the device. This file is the one that will be used each time the application is started again, so it is not necessary to recompile the application, which frees the processor from that load.

This environment offers improvements in device performance, such as removing unused objects and allowing application debugging, among others.

These runtime environments are responsible for managing the execution of applications and are essential for the operation of applications within a device.


2. Mobile device programming

In this second topic, we will cover programming for mobile devices, its main tools
and construction phases, user interfaces, its graphic context, events, animation
techniques, services, databases, and persistence. The thread model will be studied,
as well as the management of wireless communication. The sending and receiving of
text messages and multimedia messaging will also be studied, as well as the handling
of HTTP and HTTPS connections.

2.1. Tools and construction phases

Every application or development in Android is carried out using the so-called Android SDK tools. These provide developers with everything they need to compile and run applications.

The process of building and developing an application is determined by a series of phases, which can be organized as follows:

Phase 1: Configuration

First of all, it is necessary to install all the required elements, both the programming
elements and the emulation environments, AVD in the case of Android.

The items that need to be installed are: Android SDK, Android Development Tools,
and Android Platforms.

Once these are installed, the AVD device emulator will be installed last.

Phase 2: Development

In this phase, the entire programming part of the application will be developed. This
must include the source code, any files or resources used, and the project's Android
manifest file.

Phase 3: Debugging and testing

Every application requires a process that ensures that the result obtained is correct and the one desired. This phase separates two processes:


• Debugging: This process should be integrated with the application
development. In this way, future errors will be detected and avoided,
allowing the application execution process to be observed in real time.
• Testing: These are carried out once the previous process has been completed
and a final version of the application has been obtained in the development
stage. This will allow the use of tests that offer the most reliable results
possible. This process can also help detect bugs in the app so they can be
fixed before it is published.
Android provides testing tools, such as the Android testing framework, and debugging aids, such as its logging tools.

Phase 4: Publication

It is the last phase of the application building process. At this point the application
executable file is generated.

The minimum and recommended SDK version, languages, graphic resolutions and
resources required must be defined for the correct execution of the application on
all devices. If you have any type of restriction, you have to review the
AndroidManifest.xml file. The package name must also be identified within the
AndroidManifest.xml file.

A digital certificate is then created and the application is signed. This will allow the
application to be installed on the devices. This signature will also allow the author to
be identified and prevent the application from being manipulated. A key associated
with the application will be generated and, finally, the already signed executable will
be obtained.

Once the executable has been obtained, it can be transferred to a physical device or published on Google Play. In order to publish an application on Google Play, you must identify the application and accept the terms that Google requires. For more details, please visit the official website.

The official Google Play page can be visited at the following link:

https://play.google.com/store


2.2. User interfaces. Associated classes

Layouts are non-visible elements whose function is to establish the position of the graphic components (widgets) of the application.

There are different types:

• FrameLayout: stacks all elements in the top left corner of the frame, i.e. it overlaps them. The usefulness of this layout is to make different elements visible in the same activity in the same position.
• LinearLayout: lays out all elements one after the other. This layout has a property called orientation, which can be horizontal or vertical. This indicates whether the elements are placed side by side, forming a row, or stacked one below the other, forming a column.
• RelativeLayout: arranges elements anywhere in the layout. To do this, each element is usually positioned relative to the parent component or to other components already placed.

You can learn about the different properties of this type of Layout in
the following link:
https://developer.android.com/reference/android/widget/RelativeLayout.
LayoutParams.html

• TableLayout: arranges the components in the form of a table. It is made up of <TableRow> tags, which indicate the number of rows the table will have. The number of columns will depend on the number of components that each <TableRow> tag contains.
• GridLayout: another type of table, with the difference that the layout itself has the number of rows and columns it will contain as properties. Components are then added to the layout and distributed automatically.

Within a layout you can find another layout or widgets, which include all the
elements that inherit from the Widget class. The most popular ones are: Button,
TextView, EditText, ListView, RadioButton, CheckBox and ToggleBar, among others.


All these elements have a series of properties. First of all, the size properties, both
for width (width) and height (height), are mandatory in all activity components, that
is, both in layouts and widgets. Additionally, they can have different properties for
the margin, padding, text, font, or background, among others. Another of the most
important properties is the identifier, to be able to refer to this component.

Margin is the distance between two components, while padding is the space
between the component and its own content.
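As an illustrative sketch (the identifiers and string resources are invented for the example, not taken from the original material), a vertical LinearLayout containing a TextView and a Button, using the size, id, margin, padding and text-size properties described above, could be declared as follows:

CODE:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:padding="16dp">

    <!-- Text sizes use sp so they follow the user's font-size preference -->
    <TextView
        android:id="@+id/tvTitle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_margin="8dp"
        android:textSize="18sp"
        android:text="@string/title" />

    <Button
        android:id="@+id/btnAccept"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_margin="8dp"
        android:text="@string/accept" />

</LinearLayout>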

You can find out about the different widgets at the following
link:
https://developer.android.com/reference/android/widget/package-summary
.html
Units of measurement

The most common units of measurement that Android uses are match_parent (uses
all the allocated space, as big as its parent is) and wrap_content (fits the content to
its size). To express fixed sizes:

• dp. Density-independent pixels. An abstract unit based on the density of a 160 dpi screen. One dp is one pixel on a screen of this density; on other screens it is scaled to keep the same physical proportion.
• sp. Similar to dp, but also scaled according to the text font size selected by the user. Used for widget text.
• pt. 1/72 of an inch, based on the physical size of the screen.


• px. It corresponds to the current screen resolution. This is the worst measure to
use, since for similar sizes, the resolution can vary, so the appearance will vary
in the same way.
• mm, in. Based on the physical size of the screen expressed in millimeters or
inches.

2.3. Graphic context. Images

Menus

A menu is a component that contains a set of options to navigate the application more quickly.

There are different ways to create menus in an application: taskbar menus, context
menus, and side drop-down menus.

To create a menu it is necessary to create an XML file in the project's res/menu folder. This file indicates the different items that the menu will have; later, it will be necessary to load this menu into the activity.

CODE:

In the xml file:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/Configuration"
        android:icon="@drawable/ic_conf"
        android:title="@string/conf">
    </item>
    <item android:id="@+id/Help"
        android:icon="@drawable/ic_ayuda"
        android:title="@string/help">
    </item>
</menu>


In the java file:

public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.menu1, menu);
    return true;
}

To interact with this menu, callbacks that respond to these events must be created:

• onOptionsItemSelected: used in taskbar menus; it collects the selected item using the getItemId() method.
• onContextItemSelected: used in context menus. Its use is similar to the previous type.
CODE:

public boolean onOptionsItemSelected(MenuItem item) {
    // Menu possibilities
    switch (item.getItemId()) {
        case R.id.Configuration:
            Toast.makeText(getApplicationContext(), "You have pressed SETTINGS", Toast.LENGTH_SHORT).show();
            return true;
        case R.id.Help:
            Toast.makeText(getApplicationContext(), "You pressed HELP", Toast.LENGTH_LONG).show();
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}

For the side drop-down menus, there is a predefined template, in which the menu
and its interaction with the user are already established. To do this, you need to
modify the menu content to customize the application and set its operation based
on the item you have selected.

Notifications

There are two types of notifications:

• Toast: the mechanism by which a message can be displayed over the activity. These types of notifications are very useful for displaying messages of little interest to the user, since there is never any certainty that the user has seen them. The programmer only needs to indicate the message to display and its duration.
• Status bar notifications: these consist of an interface and the element that will be launched when the notification is tapped, typically the main activity of the app.

Important: these notifications must have an icon, a title and a message defined in their interface. Their removal when pressed must be specified in code, since otherwise they will remain fixed in the status bar.

The following sketch shows how to create the notification and add the element that will be launched when it is pressed.
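The original listing is not present in this copy of the material, so what follows is a hedged sketch using NotificationCompat; the channel id, icon, texts and notification id are assumptions for the example:

CODE:

// Build the notification: icon, title and message are mandatory
NotificationCompat.Builder builder = new NotificationCompat.Builder(this, "channel_id")
        .setSmallIcon(R.drawable.ic_notification)
        .setContentTitle("Title")
        .setContentText("Message for the user")
        .setAutoCancel(true); // remove the notification when it is pressed

// Element that will be launched when the notification is tapped: the main activity
Intent intent = new Intent(this, MainActivity.class);
PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
builder.setContentIntent(pendingIntent);

// Publish the notification with an arbitrary identifier
NotificationManager manager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
manager.notify(1, builder.build());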


2.4. Events

Android applications are designed for touch devices, so many of their events are
related to this feature, that is, they are events of the View class.

Listeners are the View class interfaces that are responsible for capturing events, that
is, detecting user interaction and executing the corresponding instructions.

To capture an event it is necessary to implement it using the corresponding setOn...Listener() method of the view object on which it is to be captured.

CODE:

Button change = (Button) findViewById(R.id.change);

change.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {

    }
});

This is one way to program the event when a button is pressed, but there are other
ways that are just as valid.

The most popular events that are implemented are: onClick, onLongClick,
onFocusChange, onKey, onTouch, and onCreateContextMenu.

More information about these events can be found at the following link:
https://developer.android.com/reference/android/view/View.html


2.5. Animation and sound techniques

An animation is the change of one of the properties of an object that allows it to be seen over time with a different appearance.

Android allows three types of animations:

• Frame-based animation: using the AnimationDrawable class it is possible to reproduce a sequence of different images.
• View animations or tween animations: allow the view to be modified using different techniques, such as translation, rotation, scaling or transparency.
• Property animations: modifications are applied to the object itself, not to the view.

Frame-by-frame animation

It is one of the possibilities of the Drawable class. To create an image transition you
need to create an xml file in the res/drawable folder.

CODE:

<?xml version="1.0" encoding="utf-8"?>
<animation-list xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@drawable/image1" android:duration="750"/>
    <item android:drawable="@drawable/image2" android:duration="750"/>
</animation-list>

Once you have the XML file and the images you want to display in this directory, it is possible to create the different items, one for each of the images. These items have two attributes:


• android:drawable -> the file path, without extension.
• android:duration -> the duration for which it will be displayed.

Then, in the activity in which you want to show it, you need to create the animation,
indicating the xml file, and start the animation.

Finally, the start() and stop() methods are used to perform the animation
operations.

CODE:

public class MainActivity extends AppCompatActivity {

    AnimationDrawable animation;
    ImageView myView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        myView = (ImageView) findViewById(R.id.ivAnimation);

        myView.setBackgroundResource(R.drawable.images);
        animation = (AnimationDrawable) myView.getBackground();

        animation.start();
    }
}

View animation

With this animation system you can transform some characteristics of some Views.
For example, if you have a TextView, you can move, rotate, grow, or shrink the text.

In the res/anim directory you have to create an XML document, which will contain the different animations to apply to the object, in the order in which they should act.

The tags used for this are <translate>, <rotate>, <scale> and <alpha>. In addition, it is possible to group several of these tags into sets, so that they are executed simultaneously; to do this, the <set> tag is used.


CODE:

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android">

    <scale
        android:duration="2000"
        android:fromXScale="2.0"
        android:fromYScale="2.0"
        android:toXScale="1.0"
        android:toYScale="1.0" />
    <rotate
        android:startOffset="2000"
        android:duration="2000"
        android:fromDegrees="0"
        android:toDegrees="360"
        android:pivotX="50%"
        android:pivotY="50%" />
    <translate
        android:startOffset="4000"
        android:duration="2000"
        android:fromXDelta="0"
        android:fromYDelta="0"
        android:toXDelta="50"
        android:toYDelta="100" />
    <alpha
        android:startOffset="4000"
        android:duration="2000"
        android:fromAlpha="1"
        android:toAlpha="0" />

</set>

Once you have the xml, it is possible to create the animation, with the Animation class
and the created xml.
CODE:

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        TextView text = (TextView) findViewById(R.id.tvAnimation);

        Animation animation = AnimationUtils.loadAnimation(this, R.anim.view);

        text.startAnimation(animation);
    }
}

More information about this type of animation can be found at the following link:

https://developer.android.com/guide/topics/graphics/view-animation

Play an audio file

We create a raw folder inside the res directory and include our audio files there. The raw folder is located at <app name>\app\src\main\res\raw. In the example, an mp3 with the sound of a cat has been used.

For this practice, the MediaPlayer class will be used. The code in the java file so that
the sound repeats indefinitely at the time the main activity is created will be:

CODE:

public class MainActivity extends AppCompatActivity {

    private MediaPlayer player;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        player = MediaPlayer.create(this, R.raw.cat);
        player.setLooping(true);
        // player.setVolume(20, 20); // To control the volume
        player.start();
    }
}


If you need to stop the audio, you would use the stop() method.

2.6. Service Discovery

A service is a process that runs invisibly to the user. There are two types of services: started and bound.

Started service

A component starts a service using the startService() method. In this way, the service
remains started in the background until its process ends. Even if the component that
launched it ends, this service will continue to run.


Bound service

Unlike started services, bound services are created to link a component to a service, which is done using the bindService() method. These services create a client-server interface that allows communication between the components and the service. Several components can be bound to the same service, but when all of them unbind, the service ends.

All services must be declared in the AndroidManifest.xml file, using the <service> tag. None of these types of services can communicate directly with the user, since they do not have a graphical interface; to do so, they must use a communication mechanism, such as Toasts or Notifications.
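As a hedged illustration (the MusicService class name and its behaviour are assumptions, not part of the original material), a minimal started service, the call that starts it and its declaration in the manifest could look like this:

CODE:

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class MusicService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Work performed in the background while the service is started
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        // Not used in a purely started service
        return null;
    }
}

In an activity:

startService(new Intent(this, MusicService.class));

In the AndroidManifest.xml:

<service android:name=".MusicService" />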


More information about the creation of a service can be found in the Android documentation:

https://developer.android.com/guide/components/services.html?hl=es-419

2.7. Databases and storage

There are different mechanisms for storing information:

• Internal databases: using the SQLite API, databases can be created inside the application.
• External databases: using web services, you can create connections to databases on the Internet (for example, thanks to the platform provided by Google, Firebase).
• Preferences: allow you to store the user's settings for the application.
• Content providers: components that allow data to be shared with other applications.
• Files: allow you to create files, both in the device's internal memory and on an SD card, and use them as resources.
• XML: different libraries, such as SAX and DOM, allow data to be manipulated in XML format.

SQLite Databases

SQLite databases are based on the SQL language, meaning SQL statements are
executed in the Android application.


To do this, you must use the SQLiteOpenHelper class. Its use is normally based on
creating a class that inherits from this one and implements its two mandatory
methods: onCreate() and onUpgrade(). This class has a default constructor.

CODE:

public DataBase(Context context, String name, SQLiteDatabase.CursorFactory factory, int version) {
    super(context, name, factory, version);
}

As you can see, four parameters are received:

• Context context: the context from which the database is used.
• String name: the name of the database.
• SQLiteDatabase.CursorFactory factory: a cursor factory object, which is not required (null may be passed).
• int version: the version of the database.
The onCreate() method is responsible for creating the database, so if the database
already exists, it will only open it. For its part, the onUpgrade() method is responsible
for updating the structure of said database, that is, if the version number is higher than
the one established, this method will be executed.

CODE:

@Override
public void onCreate(SQLiteDatabase sqLiteDatabase) {
    String query = "CREATE TABLE users (_id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, surname TEXT, phone INTEGER)";
    sqLiteDatabase.execSQL(query);
}

@Override
public void onUpgrade(SQLiteDatabase sqLiteDatabase, int oldVersion, int newVersion) {

}

Furthermore, the programmer will be able to implement all the methods needed for
database management.


In addition, this class has two methods that allow you to open the database connection
in a read or write manner, depending on the type of operation you want to perform.
The methods are respectively getReadableDatabase() and getWritableDatabase().

CODE:

SQLiteDatabase bd = getWritableDatabase();
bd.execSQL("INSERT INTO users (name, surname, phone) VALUES ('Ilerna', 'Online', 900000000)");
bd.close();

SQLiteDatabase db = getReadableDatabase();
Cursor cursor = db.rawQuery("SELECT name, surname, phone FROM users ORDER BY name", null);

Among the methods used to execute SQL statements are:

• execSQL(): when there is no return value.
• query() and rawQuery(): to retrieve data from the database. In the query() method the different parameters are specified to form the query, while in the rawQuery() method the SQL statement is sent in the form of a String.
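Both query() and rawQuery() return a Cursor object that has to be iterated to read the results. A minimal sketch (the column order matches the SELECT statement shown above) could be:

CODE:

Cursor cursor = db.rawQuery("SELECT name, surname, phone FROM users ORDER BY name", null);
while (cursor.moveToNext()) {
    String name = cursor.getString(0);    // first column of the SELECT
    String surname = cursor.getString(1); // second column
    int phone = cursor.getInt(2);         // third column
    // use the retrieved values here
}
cursor.close();
db.close();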

To know all the methods, it is possible to look at the documentation of the SQLiteDatabase class:

https://developer.android.com/reference/android/database/sqlite/SQLite
Database.html

2.8. Persistence

Preferences

Preferences are used as a mechanism to permanently store data, mainly application settings. For this purpose, the SharedPreferences class is used. The file created to store this data is located in a folder of the application called shared_prefs.

To do this, it is necessary to create the graphic part. It is possible to do it in two ways:


• Create an XML file stored in the specific xml folder for working with preferences.
• Create an XML file stored in the layout folder, which is a layout like that of any other activity.

Once the interface is organized, it must be displayed. If the PreferenceScreen XML is used, the activity will inherit from PreferenceActivity instead of Activity or AppCompatActivity.

As you can see, it is necessary to work with transactions.

A transaction is a set of instructions that must be executed without applying changes until all of them finish; that is, if one instruction fails, none of the instructions will apply their changes.

The transaction indicates that a class inheriting from PreferenceFragment should be called and loaded into the current view.


The fragment class will indicate which xml file should be displayed.

In the case of using a layout: in the Java code of the activity you will only have to load
the corresponding layout. In order for the values indicated in the application to be
stored, it is necessary, as indicated above, to work with the SharedPreferences class.

Writing data to file:

CODE:

SharedPreferences mypreferenceArchive = getSharedPreferences("archive", 0);
SharedPreferences.Editor editor = mypreferenceArchive.edit();

A file is created, in this case called archive, and the second parameter corresponds to the access mode.
There are different modes of access:

• MODE_PRIVATE: Only this application can access the file. Its value is 0.
• MODE_WORLD_READABLE: all applications on the device can read this file, but only this application can modify it. Its value is 1. It is not recommended for use due to security flaws and has been deprecated since API level 17.


• MODE_WORLD_WRITABLE: all applications on the device can read or write to this file. Its value is 2. It is not recommended for use due to security flaws and has been deprecated since API level 17.
• MODE_MULTI_PROCESS: used when more than one process of the application needs to write to this file at the same time. Its value is 4. It is not recommended for use and has been deprecated since API level 23.

Next, there are the different methods of the Editor class to write to the file, which are:
putString(key, value), putBoolean(key, value) or putInt(key, value), among others.

You can consult all the methods of the class in the following link:

https://developer.android.com/reference/android/content/SharedPreference
s.Editor.html

Finally, the editor must be closed for all changes to take effect.

CODE:

editor.commit();


Read data from file:

Get methods are used to collect data from the file.
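Putting writing and reading together, a minimal sketch (the key names are invented for the example) could be:

CODE:

// Writing
SharedPreferences prefs = getSharedPreferences("archive", MODE_PRIVATE);
SharedPreferences.Editor editor = prefs.edit();
editor.putString("username", "Ilerna");
editor.putBoolean("notifications", true);
editor.commit(); // apply the changes

// Reading (the second argument is the default value if the key does not exist)
String username = prefs.getString("username", "");
boolean notifications = prefs.getBoolean("notifications", false);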

The different methods can be consulted at the following link:


https://developer.android.com/reference/android/content/SharedPreferences
.html

In addition to the getSharedPreferences (String file, int mode) method, you can use
the getPreferences (int mode) function, in which the file is created by default, and you
can only have one in the application.

Files

There are different ways to store files on an Android device. Files can be found in the
device's internal memory, external memory, or resources. Files found in resources are
read-only, so information cannot be stored in them, only read operations can be
performed.

Internal storage

All applications contain a folder to store files when installed on a device. This folder has the path data/data/packageName/files and is deleted automatically when you uninstall the app.

The package used for reading and writing files is java.io, which has already been
studied in Programming.
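As a hedged sketch (the file name notes.txt is an assumption), writing and reading a file in internal storage can combine java.io with the openFileOutput() and openFileInput() methods of the Context:

CODE:

// Writing a text file in data/data/packageName/files
try {
    OutputStreamWriter writer = new OutputStreamWriter(openFileOutput("notes.txt", MODE_PRIVATE));
    writer.write("Text stored in internal storage");
    writer.close();
} catch (IOException e) {
    e.printStackTrace();
}

// Reading the same file
try {
    BufferedReader reader = new BufferedReader(new InputStreamReader(openFileInput("notes.txt")));
    String line = reader.readLine();
    reader.close();
} catch (IOException e) {
    e.printStackTrace();
}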

More information about this package can be found at the following link:
https://developer.android.com/reference/java/io/package-summary.html

External storage

Since device memory is limited, Android allows the option of using external storage,
usually an SD card, to store these files.


Since these storage systems are not fixed, before starting to use them it is necessary to
check that they exist and are ready for use.

CODE:

if (Environment.getExternalStorageState().equals(Environment.MEDIA_MOUNTED_READ_ONLY)) {
    // Read-only access
} else if (Environment.getExternalStorageState().equals(Environment.MEDIA_MOUNTED)) {
    // Read and write access
} else {
    // No access
}

Also, if the operation is a write, it is necessary to give the application permission to do so. As previously done, the WRITE_EXTERNAL_STORAGE permission must be declared in the AndroidManifest.xml file with the <uses-permission> tag, as shown below.
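A minimal sketch of that declaration inside the AndroidManifest.xml file:

CODE:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />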

Resource files

The /res folder of the Android project contains the application's resources, which are
read-only. These are the folders that contain the images (res/drawable or res/assets)
or the music (res/raw).

From the project it is possible to load any of these resources to make use of reading
them. Additionally, in the res/raw folder it is also possible to store files with the .txt
extension.

Music and image files usually take up quite a bit of space, and that is a disadvantage if
it is part of the app, as it would be a large app to download. Therefore, SD cards have
the possibility of storing these resources that belong to the application. The path to
these files is /Android/data/PackageName/files.

File type     Constant
Music         DIRECTORY_MUSIC
Downloads     DIRECTORY_DOWNLOADS
Pictures      DIRECTORY_PICTURES
DCIM          DIRECTORY_DCIM

All these folders also belong to the application, so they will be automatically deleted
when you uninstall it.


2.9. Thread model

In any operating system it is possible to have different applications running at the same time, each in a different thread, but it is also possible to have several threads in the same application.

All the processes of an application run within the same thread, and the order in which processes are chosen for execution follows a priority order. From lowest to highest priority, the order is the following:

• Empty processes.
• Background processes.
• Services.
• Visible processes.
• Foreground processes.
When you start the application, a thread called main will be created. This is the
thread that has been worked with until now.


A thread is a subprocess or thread of execution, that is, a set of tasks that are executed.

The characteristics of the main thread are:

• It is the only one capable of interacting with the user and is responsible for collecting events.
• It is the only one that can modify the graphical interface, since it is the one that can access its components.

In an application there will be as many main threads as there are activities.

If the main thread is busy with some operation, it will not be able to pick up user
interactions, and thus will give the appearance of a hung application. If this
continues for several seconds, the Operating System will throw a message that the
application is not responding.

The first solution to this problem is the creation of new threads using the
Java Thread class:

https://developer.android.com/reference/java/lang/Thread.html
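
A minimal sketch of this approach, assuming it runs inside an Activity and that longRunningOperation() and textView are illustrative names, might be:

CODE:

new Thread(new Runnable() {
    @Override
    public void run() {
        final String result = longRunningOperation(); // hypothetical slow method
        // Only the main thread may touch the UI, so post the update back to it:
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                textView.setText(result); // textView is an illustrative view field
            }
        });
    }
}).start();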

Another more widespread solution is to create asynchronous tasks.

An asynchronous task is the execution of instructions in the background. These objects inherit from the AsyncTask class.

The AsyncTask class is responsible for executing background tasks, with the
characteristic of allowing these tasks to modify the graphical interface. This is
possible through two mandatory methods:


• doInBackground(): is responsible for executing the code in the secondary


thread.
• onPostExecute(): is responsible for executing the code in the main
thread.

Operation

When creating an AsyncTask, it is common (although not mandatory) to prepare some data first. This preparation runs on the main thread, inside the task's onPreExecute() method. For example, it is usual to initialize here the progress indicator that will show the user the status of the task.

Then, in the doInBackground() method, all those instructions that could block the application if they ran on the main thread are executed, such as web requests.

During the execution of the asynchronous task, the publishProgress() method can be called, which will execute the code in the onProgressUpdate() method on the main thread, for example to update the progress bar.

Finally, when the doInBackground() method finishes, the onPostExecute() code is executed. It is important to know that the value or object returned by doInBackground() is the one that this last method receives as a parameter.

Additionally, it is possible to cancel the task by calling its cancel() method; in that case, the onCancelled() callback is executed instead of onPostExecute().
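
Putting these methods together, a hedged sketch of an AsyncTask that downloads some text and reports progress (the URL, the downloadText() helper, progressBar and textView are illustrative names) could be:

CODE:

private class DownloadTask extends AsyncTask<String, Integer, String> {
    @Override
    protected void onPreExecute() {
        progressBar.setVisibility(View.VISIBLE); // prepare the progress indicator (main thread)
    }

    @Override
    protected String doInBackground(String... urls) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < urls.length; i++) {
            result.append(downloadText(urls[i]));         // hypothetical helper doing the slow work
            publishProgress((i + 1) * 100 / urls.length); // triggers onProgressUpdate()
        }
        return result.toString(); // this value is passed to onPostExecute()
    }

    @Override
    protected void onProgressUpdate(Integer... progress) {
        progressBar.setProgress(progress[0]); // runs on the main thread
    }

    @Override
    protected void onPostExecute(String result) {
        progressBar.setVisibility(View.GONE);
        textView.setText(result); // main thread, may touch the UI
    }
}

// Launching the task:
new DownloadTask().execute("http://example.com/data.txt");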


2.10. Communications. Associated classes. Types of connections

The client-server model is based on an architecture in which there are different


resources, which are called servers, and a certain number of clients, that is, systems
that require those resources. Servers, in addition to providing resources to clients,
also offer a series of services that will be studied in the next chapter.

In this model, clients are responsible for making requests to the servers, and they
respond with the necessary information.

Currently, there are various applications that use this model, such as email, instant messaging and web browsing; that is, it is used whenever a web page is accessed.

It should be noted that in this model, roles are not exchanged between clients and
servers.


Sockets are also an example of the client-server model.

A socket is a mechanism that allows communication between applications


over the network, that is, it abstracts the user from the passage of information
between the
different layers.
Its main function is to create a communication channel between applications
and simplify the exchange of messages.

There are two types of sockets: connection-oriented and connectionless.

In module 9, Programming services and processes, you will learn the


differences between socket types, and the Java classes used
depending on the socket type.

Since this is a service that requires an Internet connection for communication, you
need to have the Internet permission defined.


2.11. Wireless communication management

A broadcast receiver is an application component that is responsible for receiving messages sent by the Operating System and by other applications.

To declare a broadcast receiver in the application, it is necessary to indicate it in the AndroidManifest.xml file, using the <receiver> tag and an <intent-filter> to indicate the action to which it responds. This way, it will always be available, that is, from the moment the application is installed until it is uninstalled.

To avoid this, it is possible to declare the broadcast receiver for specific times, using
Context.registerReceiver() and Context.unregisterReceiver(), which are usually
found in the onResume() and onPause() methods, respectively.
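
A minimal sketch of this dynamic registration inside an Activity, reacting to the battery broadcast as an illustrative action, could be:

CODE:

private final BroadcastReceiver batteryReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // React to the broadcast here.
    }
};

@Override
protected void onResume() {
    super.onResume();
    registerReceiver(batteryReceiver, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
}

@Override
protected void onPause() {
    super.onPause();
    unregisterReceiver(batteryReceiver);
}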

Broadcast messages can be sent in two ways:

• They are sent using the Context.sendBroadcast method: if this method is


used, messages are asynchronous and may arrive out of order.
• They are sent using the Context.sendOrderedBroadcast method: they are
sent one by one, guaranteeing their order, in a synchronized manner.

One of the uses of this concept will be seen in the reception of messages.

2.12. Sending and receiving text messages. Security and permissions

In order to be able to send text messages from the Android application, it is necessary to grant permissions in the AndroidManifest.xml, as has been done previously. The permission that must be given for this is SEND_SMS, using the <uses-permission> tag.


Text messaging can be done in two ways: from another activity or directly.

To send a text message from another activity, you must create an Intent that indicates the phone number to which the message will be sent. This intent will also carry the message content and the type vnd.android-dir/mms-sms. The message will then be sent as a new activity.

CODE:

Intent send = new Intent(Intent.ACTION_VIEW, Uri.fromParts("sms", number, null));
send.putExtra("sms_body", "Message content");
send.setType("vnd.android-dir/mms-sms");
startActivity(send);

It is also possible to send messages directly, and for this you need the SmsManager class. This class has the sendTextMessage() method, which is the one that will send the text message.

The syntax of this method is the following: void sendTextMessage(String destinationAddress, String scAddress, String text, PendingIntent sentIntent, PendingIntent deliveryIntent).

In this method, the fields that usually need to be filled in are destinationAddress, with the phone number to which you want to send the message; text, with the content to send; and sentIntent, which carries the relevant information about the sending of the message.

A PendingIntent is a communication with the Operating System whose execution moment is not known in advance.

CODE:
PendingIntent send = PendingIntent.getActivity(MainActivity.this, 0, new Intent(MainActivity.this, Main2Activity.class), 0);
SmsManager manager = SmsManager.getDefault();
manager.sendTextMessage(number, null, "message", send, null);

It is also interesting to confirm delivery after sending the message. To do this, the SmsManager class allows you to know the status of the message through the deliveryIntent parameter.
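
One possible sketch of requesting delivery confirmation is to pass a broadcast PendingIntent as the deliveryIntent parameter and listen for it with a receiver; the "SMS_DELIVERED" action name and the number variable are assumptions made for this example:

CODE:

PendingIntent delivered = PendingIntent.getBroadcast(
        MainActivity.this, 0, new Intent("SMS_DELIVERED"), 0);

registerReceiver(new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // getResultCode() == Activity.RESULT_OK means the message was delivered.
    }
}, new IntentFilter("SMS_DELIVERED"));

SmsManager.getDefault().sendTextMessage(number, null, "message", null, delivered);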

To receive a message, a new RECEIVE_SMS permission must be granted. Also, in the


AndroidManifest you have to register the receiver that will be in charge of receiving
text messages.

CODE:

<receiver android:name=".ReceiverSMS">
    <intent-filter>
        <action android:name="android.provider.Telephony.SMS_RECEIVED" />
    </intent-filter>
</receiver>

<uses-permission android:name="android.permission.RECEIVE_SMS" />
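
The .ReceiverSMS class registered above could then extract the messages from the incoming intent. A hedged sketch using the classic PDU-based API might be:

CODE:

// Uses android.content.BroadcastReceiver and android.telephony.SmsMessage.
public class ReceiverSMS extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        Bundle bundle = intent.getExtras();
        if (bundle != null) {
            Object[] pdus = (Object[]) bundle.get("pdus");
            for (Object pdu : pdus) {
                SmsMessage sms = SmsMessage.createFromPdu((byte[]) pdu);
                String from = sms.getOriginatingAddress();
                String body = sms.getMessageBody();
                // Handle the received message here.
            }
        }
    }
}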

2.13. Sending and receiving multimedia messages. Content


synchronization. Security and permissions

To send multimedia messages, you only need to change the content and type of
message.

To receive a multimedia message, the permission that must be given to the


application is RECEIVE_MMS.


2.14. Handling HTTP and HTTPS connections

The HTTP protocol is a client-server protocol that is responsible for exchanging information between web browsers and web servers.

The HTTPS protocol is an application layer protocol based on HTTP; that is, it adds security to that protocol. It uses SSL/TLS-based encryption. While HTTP uses port 80, HTTPS uses port 443.

If you want to create an application with Internet access, you must declare the necessary permission in the AndroidManifest.xml:

CODE:

<uses-permission android:name="android.permission.INTERNET" />

Android has two packages that allow the development of this type of application: java.net and android.net.

As in a Java application, to connect to a web page it is necessary to make an HTTP


connection, and to do so the URL object must be created, with the web address, and
then the request must be made with the HttpURLConnection class.

CODE:

URL url = new URL("http://google.com");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
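
Extending the snippet above, a hedged sketch of reading the response body (this must run off the main thread, for example inside doInBackground()) could be:

CODE:

// Uses java.io.BufferedReader and java.io.InputStreamReader; IOException handling is up to the caller.
URL url = new URL("http://google.com");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(connection.getInputStream()));
    StringBuilder response = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null) {
        response.append(line);
    }
    reader.close();
    // response.toString() now contains the page content.
} finally {
    connection.disconnect();
}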

These connections can be implemented as requests to a server where a database is stored; in this way, it is possible to access data external to the application.

To do this, it is recommended to start with a local server, such as WAMP, which provides an Apache server with PHP and MySQL, and then deploy the back end on the chosen web server.


PHP is a server-side programming language used for dynamic web content development.

How this works is summarized as follows:

• Create the different files with the .php extension, which are responsible
for communication between the application and the database. These
files contain the various queries to the database.
• Make a web request to the server (http://localhost or http://127.0.0.1), indicating the path to the files.

http://localhost/archivosPHP/mostrarUsuarios.php
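
For instance, a hedged sketch of calling that script from the app and reading a JSON response (assuming the PHP file prints a JSON array with a "name" field, and that the code runs inside a method that handles IOException and JSONException; 10.0.2.2 is how the Android emulator reaches the host machine's localhost) could be:

CODE:

// Run off the main thread, for example inside doInBackground().
URL url = new URL("http://10.0.2.2/archivosPHP/mostrarUsuarios.php");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
StringBuilder body = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
    body.append(line);
}
reader.close();
connection.disconnect();

JSONArray users = new JSONArray(body.toString());   // org.json, included in Android
for (int i = 0; i < users.length(); i++) {
    JSONObject user = users.getJSONObject(i);
    String name = user.getString("name");            // illustrative field name
}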


UF2: MULTIMEDIA PROGRAMMING


In this Training Unit, integrated multimedia libraries and their main uses will be
analyzed in detail.

1. Using integrated multimedia libraries

This topic will study the concept of multimedia applications, the architecture used,
multimedia data sources, multimedia object processing and their playback.

1.1. Multimedia Applications Concepts

A multimedia application is one that contains different types


of information integrated in a coherent manner. These types of information can
be: text, audio, images, videos, animations or interactions.

Almost all of the apps currently on the Android market are considered multimedia
apps, since they all contain some of these elements.

All of these applications must manage that information appropriately. The treatment of audio is not the same as that of an image, since each type of information has its own classes and methods.

In this chapter we will study these classes and their most popular methods.


1.2. Architecture of the API used

The Android architecture is divided into different layers, always keeping in mind that the core is Linux (we can say that Android is based on Linux). This is the composition of the Android OS, summarized in the following layer diagram:

[Diagram: Android software stack — System Apps; Java API Framework (managers, content providers, view system); Native C/C++ Libraries and Android Runtime (ART); Hardware Abstraction Layer (HAL); Linux Kernel with its drivers and power management.]

• Linux Kernel is the lowest layer. It is responsible for managing compatibility with the hardware; that is, it is where all the drivers for Wi-Fi, USB, screens, etc. live. This layer contains the small pieces of software responsible for that compatibility.


• Libraries: they work at the software level rather than at the hardware level like the previous layer. They are responsible for support for 2D and 3D graphics, font types, data managers, web browsing, etc.
• ART (Android Runtime) is the layer where the execution of our apps on the Android OS is handled.
ART is the runtime (virtual machine) that makes apps run on the OS (without it, applications could not be executed). It makes apps take up a little more space, but consume fewer resources at run time.

• Application Framework is the layer on which we will be working throughout the course. It contains all the Java classes that the Android SDK provides for us to create new classes and new mobile apps.
• System Apps is the last layer, where the end user acts.

Google has created a set of development APIs for Android applications; products such as Maps, Firebase or push notifications have been very successful in recent times.

At the following page, developers can find all the tools that Google offers for the development of their applications:

https://developers.google.com/?hl=es-419

[Screenshot: developers.google.com product cards — Android, Cloud Platform, Firebase, Android Studio, Web and TensorFlow.]


1.3. Multimedia data sources. Classes


A content provider is the mechanism that allows you to share data with
other applications, as well as obtain data from external applications.

The classes used to obtain information are:

• Browser: obtains information about the browsing or search history.
• Calendar: obtains information about calendar events.
• CallLog: obtains the record of the latest calls, whether incoming, missed or outgoing.
• Contacts: obtains the list of contacts on the device.
• Documents: obtains the different text files.
• MediaStore: obtains the various audio, image and video files on the device, both in internal and external memory.
• Settings: obtains the system preferences.
• Telephony: obtains the different messages, both text and multimedia, received or sent.
• UserDictionary: obtains the words defined by the user and the most used ones.

It is important to note that in order to use this information, the user must accept access to it. Therefore, it is necessary to declare in the AndroidManifest.xml the permission for the information with which the application is going to work, as already mentioned above. For example: READ_CALENDAR, READ_CALL_LOG, READ_CONTACTS or READ_SMS.

In this link you can check all the permissions that can be
set in an application:

https://developer.android.com/reference/android/Manifest.permission.html


1.4. Time-based data

Firebase is a real-time database hosted in the cloud. Data is stored as a tree of nodes rather than as SQL records: each node holds its own information and may contain further child nodes.
These are the steps to follow to add Firebase to an Android project.

In console.firebase.google.com (the Firebase console, which requires a Google account) a new project is created.

[Screenshot: Firebase console, "Create a project" dialog.]

Firebase is then added to your Android app from the project console ("Add Firebase to your Android app").


Now you need to enter the application package name and SHA-1. It is necessary to
take into account whether the application being developed has a testing purpose or
a more professional one (for example, with the intention of uploading it to
GooglePlay), since it must be configured in debug mode for the first option and
without debug mode for the second.

[Screenshot: Firebase "Register app" form — Android package name, app nickname (optional) and debug SHA-1 signing certificate (optional).]

This will generate a file with a .json extension (google-services.json) that has to be downloaded and placed in the project tree, under Project/<project name>/app. Once this file is in that directory, the Firebase dependency is added in the Gradle scripts:

CODE:

dependencies {
    // ...
    compile 'com.google.firebase:firebase-core:15.0.0'
}

The wizard will show the changes that need to be made to the application. After doing this, communication between the project and the database will already exist. Once Firebase is implemented in our project, it will be possible to create nodes in the database. The following code shows the creation of the first node.

CODE:


DatabaseReference db = FirebaseDatabase.getInstance().getReference().child("warehouse");

To collect data from the database it is necessary to use a listener, since it is a real-time database and modified data must be shown as it changes. The collected data is converted (parsed) into objects of the corresponding class.

CODE:

ValueEventListener event = new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        Class c = dataSnapshot.getValue(Class.class);
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
    }
};
db.addValueEventListener(event);

To send new data to the Firebase server, it is also necessary to send it parsed, that
is, the complete object.

CODE:

db.push().setValue(p);
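
For example, assuming a hypothetical Product model class with an empty constructor and getters (a requirement for Firebase's automatic serialization), the object p could be defined and pushed like this:

CODE:

// Hypothetical model class used only for this sketch.
public class Product {
    private String name;
    private int units;

    public Product() {}                        // empty constructor required by Firebase
    public Product(String name, int units) { this.name = name; this.units = units; }
    public String getName() { return name; }
    public int getUnits() { return units; }
}

Product p = new Product("screws", 100);
db.push().setValue(p);   // creates a new child node under "warehouse" with a generated key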


1.5. Multimedia object processing. Classes. States, methods and
events

Maps

As mentioned above, the Google Console allows you to create a Google Maps API
project. Once created, instead of being assigned to an Android project as in Firebase,
it provides a key that can be pasted into all projects that use the map.

Android Studio offers the possibility to create an activity using a map template,
which will make it easier to create the layout from scratch. Once the map is created,
it is possible to indicate the key obtained in the Google Maps API in the
google_maps_api.xml file located in the res/values folder.


CODE:

<string name="google_maps_key" templateMergeStrategy="preserve" translatable="false">YOUR_KEY_HERE</string>

Classes for using the map:

• CameraUpdate
• GoogleMap
• LocationManager
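
A minimal sketch of using GoogleMap and CameraUpdate inside the onMapReady() callback generated by the map template (the coordinates and zoom level are illustrative) could be:

CODE:

@Override
public void onMapReady(GoogleMap googleMap) {
    LatLng point = new LatLng(41.3851, 2.1734);   // illustrative coordinates (Barcelona)
    googleMap.addMarker(new MarkerOptions().position(point).title("Marker"));
    CameraUpdate camera = CameraUpdateFactory.newLatLngZoom(point, 12);
    googleMap.moveCamera(camera);                 // centers the map on the marker
}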

Sensors

Sensors are devices that collect information from the outside environment. All
Android devices contain a series of sensors that allow better use of it.

The classes used for sensor use are found in the android.hardware package and are:
Sensor, SensorEvent, SensorManager and
SensorEventListener.

Sensors on Android devices are not always available, so it is always necessary to check for them before using them.

Some of the most important sensors in devices are:

• Accelerometer.
• Gravity.
• Gyroscope.
• Linear acceleration.
• Rotation.
• Proximity sensor.
• Brightness.
• Pressure.
• Temperature.


• Humidity.
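
As a minimal sketch inside an Activity, checking for the accelerometer and registering a listener for it (following the availability check mentioned above) could look like this:

CODE:

SensorManager manager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor accelerometer = manager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
if (accelerometer != null) {   // the sensor may not exist on this device
    manager.registerListener(new SensorEventListener() {
        @Override
        public void onSensorChanged(SensorEvent event) {
            float x = event.values[0], y = event.values[1], z = event.values[2];
            // React to the new acceleration values here.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }, accelerometer, SensorManager.SENSOR_DELAY_NORMAL);
}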


1.6. Playing multimedia objects. Classes. States, methods and events

There are a large number of Android classes that allow you to use different
multimedia resources. These are some of them.

For audio:

• AudioManager: Manages different audio properties.


• AudioTrack: Play audio.
• AsyncPlayer: Play audio on a secondary thread.
• SoundPool: Play audio.

For audio and video:

• JetPlayer: plays JET interactive audio content.
• MediaController: allows you to view the MediaPlayer controls.
• MediaPlayer: plays audio and video.
• MediaRecorder: records audio and video.

For video:

• VideoView: Play video.

For the camera:

• Camera: Allows you to use the camera for photos and videos.
• FaceDetector: Detect faces from the camera.

Once the classes that allow working with multimedia objects have been learned, the unit focuses on MediaPlayer and Camera.

MediaPlayer

An audio resource handled by MediaPlayer goes through many states (Idle, Initialized, Prepared, Started, Paused, Stopped and PlaybackCompleted), from when the resource is loaded until it finishes playing.


Whenever you want to play a music file, you must first specify the path to the file.
Then it is necessary to prepare the audio for later execution. All this has already
been explained in more depth (with code included) in point 2.5 of UF1.

Please note that audio preparation can take some time, so it is necessary to do it in
the background.
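
A hedged sketch of this background preparation using prepareAsync() (the URL is illustrative) could be:

CODE:

MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource("http://example.com/song.mp3");   // illustrative path or URL
} catch (IOException e) {
    e.printStackTrace();
}
player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer mp) {
        mp.start();   // the player is ready: playback can begin
    }
});
player.prepareAsync();   // prepares the audio without blocking the main thread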
Camera

Devices that include a camera give us great opportunities to integrate images or


videos immediately into our application. There are two ways to use the camera in an


app: using the camera directly, or using the camera app.

If the first option is carried out, it is necessary to give CAMERA permission to the
application (in the AndroidManifest.xml file) in order to use it. If the second option is
used, it is not mandatory to have access to the camera.

<uses-permission android:name="android.permission.CAMERA"/>

It is recommended to specify the hardware requirements in the manifest file,


indicating whether the camera is required or just suggested:

<uses-feature android:name="android.hardware.camera" android:required="false"/>

Let's not forget to request all the necessary permissions associated with our
application, such as access to external storage, Internet access, etc.

If the camera is opened from the application itself, it is first necessary to detect whether the device has a camera and, if so, access it using the Camera.open() method. Once the camera is no longer needed, its resources must be released with the release() method.
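
A hedged sketch of this detect-open-release cycle (preview configuration omitted for brevity) might be:

CODE:

// Check that the device actually has a camera before opening it.
if (getPackageManager().hasSystemFeature(PackageManager.FEATURE_CAMERA)) {
    Camera camera = null;
    try {
        camera = Camera.open();          // may fail if the camera is in use
        // Configure parameters and the preview here.
    } catch (RuntimeException e) {
        // Camera unavailable or already in use.
    } finally {
        if (camera != null) {
            camera.release();            // always free the camera when finished
        }
    }
}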


UF3: DEVELOPMENT OF GAMES FOR MOBILE


DEVICES
This training unit will cover the development of games for mobile devices, with an
analysis of game engines and the study of the development of 2D and 3D games.

1. Game Engine Analysis

This first topic provides an in-depth analysis of game engines, studying the concepts
of animation, game architecture and its components, types of game engines and
their uses, areas of specialization, main components of the game engine and
libraries. In addition, a study of existing games is carried out, as well as the
application of possible modifications to them.

1.1. Animation concepts

Nowadays, mobile devices are part of people's daily lives, grouping all user needs within a single device. As a result, mobile devices are used for work and communication as well as for leisure. This is where the development of animated applications for devices comes in.

An animation is the change over time of one of the properties of an object, so that it is seen with a different appearance.

The basis for creating these animations lies in programming. Android offers a
number of mechanisms dedicated to creating animations for both 2D and 3D
objects.


• Canvas: is the template or canvas that allows you to define a control at the
user interface level in applications. Canvas can represent any object such as
ovals, lines, rectangles, triangles, etc.
• Animators: the mechanism that allows you to add a specific animation to any object by animating its properties through code or styles (a minimal sketch follows this list).
• Drawable Animation: Allows you to load a series of Drawable resources to
create an animation. It uses traditional animations, such as placing one
image after another in order as if it were a movie.
• OpenGL: is one of the most important libraries for high-performance 2D and
3D graphics. Android includes support for its use.
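
As a minimal sketch of the Animators mechanism (the imageView field is illustrative), a property animation that fades a view out and back in could be:

CODE:

// android.animation.ObjectAnimator animates the "alpha" property of the view.
ObjectAnimator fade = ObjectAnimator.ofFloat(imageView, "alpha", 1f, 0f, 1f);
fade.setDuration(2000);    // two seconds
fade.setRepeatCount(1);    // play the animation twice in total
fade.start();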

1.2. Game Architecture: Components


Before starting the development of a game for mobile platforms, it is necessary to
first define what its architecture will be. This architecture will allow you to detail
what the application development structure will be like.
This structure will be made up of a series of blocks with a specific function within the
game:
• User interface: will be responsible for collecting all the events that the user
has created.
• Game logic: This is the central part of the video game. It will be responsible
for processing all user events and continuously drawing the game scene. This
is what is known as Game loop. The state of the game is continuously
checked and a new interface is drawn for each state, repeating this process
almost infinitely.
Collisions and sprites will also be controlled in this block. By using libraries
such as OpenGL, the different characters within the game are modeled and
designed. Through this, it is possible to generate animations in characters
using sprites (images with transparency). This is known as canvas rendering.
Each rendering will be encapsulated in an animation frame.

• Resources used: within the logic of the game it is also necessary to control all
the sound effects and images.

This block is the fundamental part for the developer. All functions have to be


programmed and controlled through code.


Games, unlike applications, require greater consumption of the device's
resources, and in many cases can use almost all of them. It is therefore the
programmer's task to optimize the use of these resources.

• Android Framework: Android provides a powerful framework that allows you


to animate a large number of objects and represent those objects in a
multitude of ways. To do this, it has different mechanisms that offer the
developer these functionalities:
- Property Animations: allows you to define some properties of an object
to be animated, defining options such as: duration of an animation,
interpolation time, repetition of behaviors or animations, grouping of
series of animations sequentially, frames for updating an animation, etc.
- View Animations: allows the use of different view animation mechanisms,
such as translations, rotations, scaling, etc. They give the sensation of
transformation of one image into another at a given time.
- Drawable Animations: Using Drawable resources, you can create a series
of animations as if it were a movie.

• Output: These are the succession of scenes that are updated in the user
interface.

These frames are processed one after the other, giving the user a sensation of
continuous movement and animation. All these blocks of frames result in the game
scene at different moments in time.

1.3. Game Engines: Types and Uses

A graphics engine or video game engine is a set of programming routines that provide the design and rendering of a 2D or 3D graphical scenario.

The main task of an engine is to provide the game with a rendering engine for 2D
and 3D graphics, physics engine or collision detector, sounds, programming scripts,
animations, artificial intelligence, memory management and a graphical scene.
Nowadays there is a wide variety of graphics engines, such as: Ogre (which is open source), Doom Engine, Quake Engine, Unity, CryEngine, Source Engine, Unreal Engine, GameMaker, etc.


These engines typically provide:


• API and SDK for development.
• Some engines allow you to create games without writing code. It is only
necessary to use the different mechanisms implemented and documented in its
API. Some require the use of their own programming language.
• Editing toolset.
• Although they do not provide ready-made sets of visual elements, they allow
their creation through the editing tools of the software provided.
Engines can be classified based on:
• According to the facilities offered:
- Graphic libraries: collections such as SDL, XNA, DirectX or OpenGL, classified by their ease of development and use.
- Engines: frameworks that either provide complete visual development or require programming scripts to use the visual elements; OGRE, Unreal or id Tech, for example, require support scripts for their functionality.
- Specialized creation tools: some of the engines have been developed with an
exclusive character, oriented in their purpose, such as video games or other
types, for example: GameMaker or ShiVa, which are for the exclusive
development of game applications. Unity, for example, can be used for
different genres.

• According to the license:


- Private engines (some with free license).
- OpenSource Engines.
The choice of a particular graphics engine is very subjective. In most cases, this
choice will be determined by the type of game application to be developed and the
resources available for its development.


1.4. Areas of specialization, libraries used and programming


languages
The choice and use of graphic libraries during the development of a game is always
necessary. This allows certain animations to be performed on objects in a simpler
way. These libraries can be classified by areas of specialization based on their
functionality:
• Renderings and effects: OpenGL, Direct3D, GKS, PHIGS, PEX, etc.
• Based on scene graphs: OpenGL Performer, Open Inventor, OpenGL
Optimizer, PHIGS+, etc.
• Graphical tool libraries: World Toolkit, AVANGO, Game Engines, etc.
It is possible to program a video game in a multitude of languages. The most used in
video game development are C, C++, C# and Java.
The use of one language or another is defined by the type of game you want to
develop. 2D or platform games, which work with simple sprites, are usually based on
the C language. In the case of games with greater graphical complexity, especially
when working with three-dimensional objects and their properties, most
programmers use C++, C# and Java. Since they require the use of a virtual machine,
they are sometimes less chosen, although their power in game development is also
high.
There are a large number of libraries that can be used during the development of a
game. Some of these libraries are as follows:
• Allegro: Free and open source library based on C language. Allows the use of
graphic elements, sounds, devices such as keyboard and mouse, images, etc.
• Gosu: library that allows the development of 2D games and is based on C++
and Ruby languages. It is free software under the MIT license. Provides a
game window that allows the use of keyboard, mouse and other devices. It is
characterized by its use for sounds and music within a game.
• SDL: set of libraries for the design of 2D elements, also free software. It
allows the management of multimedia resources such as sounds and music,
as well as the processing of images. It is based on C language, although it
allows the use of other languages, such as: C++, C#, Basic, Ada, Java, etc.
• libGDX: based on Java. Library oriented for use in multiplatform applications
that allows you to write code in a single language and then export it to other
languages. Allows easy integration with other Java tools.
• LWJGL: library for developing games in Java language.
Provides access to the use of libraries such as OpenGL.

• OpenGL: graphics library for developing 2D and 3D games. It is one of the


most widely used libraries today. It is free and open source software. It
allows the use of basic elements such as lines, points, polygons, etc., as well
as other more complex elements, such as: textures, transformations, lighting,
etc.
• Direct3D: set of multimedia libraries. It is OpenGL's biggest competitor in the
gaming world. It allows the use of elements such as lines, points or polygons,
as well as management of textures or geometric transformations. Property of
Microsoft.
1.5. Components of a game engine
A game engine is a fundamental part of a game's programming code. This graphics
engine is responsible for most of the graphical aspects of a game.
One of its tasks is to establish communication and take advantage of all the
resources that a graphics card offers.
The main components of a game engine are:
• Libraries: all those libraries used for the development of figures, polygons, lights
and shadows, etc.
• Physics engine: responsible for managing collisions, animations, programming
scripts, sounds, physics, etc.
• Rendering engine: It is responsible for rendering all the textures of a map, all the
reliefs, object smoothing, gravity, stripe drawing, etc.
These components globally collect all the elements that appear within a game. Each
of these elements is part of a set of resources that can be found in every graphics
engine:
• Assets: all 3D models, textures, materials, animations, sounds, etc.
This group represents all the elements that will be part of the game.
• Rendering: All textures and materials in this part make use of the
resources designed for the graphics engine. This will show the visual aspect and
potential of a graphics engine.

• Sounds: It is necessary to configure within the engine how the audio tracks will
be. The sound of the video game will depend on the processing capacity of
these sounds. Some of the configurable options are: tone modification, looping
of sounds, etc.

• Artificial intelligence: This is one of the most important features that a graphics
engine can develop. This adds incentives to the game, allowing the
development of the game to occur based on decision-making defined by a set of
rules. In addition, it defines the behavior of all elements that are not controlled
by the user player, but are part of the game elements.
• Visual scripts: Not only is it possible to execute portions of code defined in the


game, but they can also be executed in real time within the graphical aspect of
the game.
• Shading and lighting: the graphics engine provides colors and shadows to each
of the vertices that form part of the scene.
As has been seen, the tasks that make up a graphics engine require the use of a large
number of resources within the computer. Hence, the higher the processing power
and speed of a graphics card, the better the result of a game scene. To reduce the
cost of this, some engines employ a number of techniques that allow terrain or
materials to be rendered without consuming resources, but instead appearing within
the visual space, which is known as culling.
Graphics engines are a key aspect within a game. They have been created exclusively
for the development of video games, and today they are the fundamental tool for
creating video games. The evolution of games and entertainment is linked to the
evolution of game engines.

1.6. Libraries that provide the basic functions of a 2D/3D engine

As already mentioned in some of the previous sections, libraries are a key section in
the game development process. To give an object or element a more realistic
appearance, graphics engines need to process a series of functions, which draw
these objects in 2D or 3D. Since the design of an object requires a great deal of
programming work, and it will be represented on many occasions in a game, the
creation of these objects is covered by a series of functions that are provided by the
libraries.

These libraries allow the programmer to abstract from the more complex aspects of
representing visual elements; it is only necessary to call the function of the library in
charge of this and collect the returned object for representation in the scene.

The basic functions used by graphics engines are those that allow working with
visual elements, such as points, lines, planes or polygons. They provide the
fundamental resources in a game such as sounds and music. The character modeling
section should include the use of sprites for 2D, and the use of models (assets) for
the development of 3D game platforms.


1.7. 3D graphic API

OpenGL is a 3D drawing API used by applications that produce graphics. This API consists of a large number of functions for creating three-dimensional elements and objects.

The goal of these APIs is to provide the developer with a document where they can
find all the resources, and, in this way, reduce the complexity in communicating with
the graphics cards.
The operation of this type of library consists of accepting as input a series of primitives (lines, points and polygons) and converting them into pixels.
OpenGL is currently at version 4. Each version has evolved in terms of textures, shapes and transformations of objects. It is possible to use any of them, using the documentation provided on the official website, which offers numerous example codes, books and video tutorials for the desired library.
OpenGL also includes its own shading language, called GLSL, which allows parts of the scene rendering to be programmed.
Another API that offers this same type of 3D graphics is Direct3D. It offers a low-level 3D API, in which you can find basic elements such as coordinate systems, transformations, polygons, points and lines.
It is a library whose graphic resources require an experienced level of programming. One of the strong points of this API is that it is independent of the graphics hardware used, which allows for more versatile development.
1.8. Study of existing games

Nowadays, the gaming market for devices is very large. There are already a huge
number of games for all possible types of genres. This often makes it difficult for
some of them to succeed. For this reason, it is advisable to carry out a market study
before its development, focusing on those games of a similar nature to the game to
be created.

If the game is going to be published on the Internet, it is important to know what


type of audience it is intended for. Likewise, it is good to know what the
development limitations are, as well as to measure the amount of resources
needed for its creation, development and publication.


In many cases, the development of a type of video game genre is related to the
success of one of them. When downloads of a particular game increase significantly,
it means that that type of game is a good draw for users. This fact can be exploited
by game developers with fewer resources to enter the market.

Both 2D and 3D games are accepted in the mobile gaming industry, so the range of
possibilities is unlimited.


1.9. Applying modifications to existing games

Android games account for almost the largest percentage of downloads of leisure-
related applications, so it is possible to find examples of games that have already
been tested and whose success has been measured. The process of creating a new game is not easy, and it often requires development by a large number of people from different specialties working together within the project. However, devices are continually updated, so it is necessary to adapt the game to the new times, adding new features and optimizing its performance on the devices.

Unlike apps, games are generally not open source, so it is not possible to legally add
new modifications to the game if you are not part of the development team or the
author of one of them.

When a game is published on the Internet on platforms such as Google Play, a


commitment is made to maintain that application: developers must correct, as far as possible, all errors that are detected, both by users and by the developers themselves. This maintenance and control of modifications is very important to the potential success of a game.


2. Development of 2D and 3D games

This topic will study the development of 2D and 3D games, their main development
environments, the integration of the game engine into said environment, 3D
programming concepts, their development phases and the properties of objects.
The different applications of both the graphics engine functions and the scene graph
will also be shown, as well as the execution analysis and optimization of the code.

2.1. Development environments for games

Making a game is not an easy task, since it requires knowledge of different


specialties, such as programming, design and animation. To make this task a little
easier there are development environments. These are software platforms that offer
a graphical interface for creating games through the use of a series of tools.

There are different types of environments that are geared towards a type of game,
either 2D or 3D. It is also possible to find different environments depending on the
complexity of the game to be created.

If the goal is to make simple games with an interface that is not very demanding for
2D platforms, it is possible to use environments such as:

• Stencyl: is a platform that allows the creation of 2D games through the use
of code blocks, which help to understand basic programming structures, so it
is not necessary to develop lines of code. Allows you to add images for
characters, which are added to a scene by simply dragging them. It is a
simple and easy-to-use platform.
• Pygame: game development environment using the Python language. It
allows the creation of 2D games. It is based on the use of sprites for
characters and libraries of sound and multimedia resources. Programming is
a bit more complex, since it is necessary to create control structures and
variables through code.

When the game to be developed requires greater graphical power, as is the case
with 3D, it is necessary that the development environments be, in turn, more
complete. Some of the most important ones are:


• Unity 3D: Today, Unity is one of the most widely used tools in the gaming
world, as well as one of the most highly rated. Unity allows you to export a
game created on any of the different devices. Unity is based on the C#
language. It has its own engine for developing the graphic part, which allows
for a very complete development of all the scenes in a game. It is possible to
configure all the necessary elements, such as: lighting, textures, materials,
characters, terrain, sounds, physics, etc.
• Unreal Engine: Along with Unity 3D, it is one of the best-known and most
valued environments in the world of game development. Allows
configuration and design of advanced graphical resources in the same way as
Unity.

Both environments require a significant level of programming.

2.2. Integrating the game engine into development environments

Once the Android SDK has been set up on the computer, it is necessary to configure Android integration within the development environment. In this case the chosen environment is Unity, which must know where the SDK is located in order to compile and subsequently send the application to the device.

To do this, the first step is, once the project is selected, go to the editing menu and
select preferences. This will display the Unity preferences window, and within it, in
the external tools section, you will be able to view the different compiler
configuration parameters. You can choose the editor for programming the lines of
code, as well as the path where the Android SDK is installed on your computer.

Once this path is selected, Unity will be able to compile an Android project. To
perform the compilation, select the compilation configuration menu. In this window
you can choose which will be the compilation platform, in this case Android. This will
perform the entire graphics rendering process as well as programming for that
platform. At this point the desired scenes will be added to compile. And once the
platform is chosen, the compile option must be selected.

Android does not allow compiling without a bundle identifier, so you will need to


define such an identifier which will then be used by Google Play for publishing.
Within the project settings section, all the sections of the package will be specified.

These sections are as follows:

• Resolution and presentation of the application.
• Application icon.
• Splash image: the image shown before the game starts, once the application is launched.
• Rendering: rendering parameter settings for Android.
• Identification: in this section you specify the package identifier, which is usually the name of the project structure.
• Code version.
• Minimum Android API level.
• Graphics API used.
• Default installation location of the app on the device.

Finally, once all these sections have been completed, an .apk (extension for Android
applications) will be generated on the computer, which will be the executable file
that will be installed on the desired device.


2.3. Advanced 3D programming concepts

The development and programming of a three-dimensional game involves applying


some advanced concepts, such as: movements, physics and collisions. These allow
the game to be as close to a real scene as possible. Unity provides a number of
classes that allow you to define and configure these properties on character and
object models.

One of these classes is the Character Controller, which allows you to apply physics and collisions to characters using a capsule shape. It provides a simple collider, which makes the character walk on the ground and not climb walls.

The types of collider that exist are:

• Box collider: This is a cube-shaped collision. These are generally used on


cubic-shaped objects, such as a box or a chest.
• Capsule collider: This is an oval-shaped capsule made up of two hemispheres.
• Mesh collider: these are more precise colliders, which are associated with
already designed 3D objects. This allows you to create a collider that is
completely tailored to the shape of the object.
• Sphere collider: is a basic spherical collider. It is usually applied to spherical
objects, such as: balls, stones, etc. This effect has a great impact on objects,
which appear rolling in the scene or that are falling, for example.

Another concept that is frequently used in game development is Unity's particle


system. The objects that are going to be represented in a scene are not always solid
or elements with well-defined shapes. Therefore, when you want to represent fluids
or liquids that are in motion, smoke, clouds, flames, etc., it is necessary to use the
effects provided by the particle system. This particle system is made up of simple
and generally small images, which appear in the scene, repeating continuously in
one direction. This makes them all represent, together, a unique element for the
user. It is necessary to define what the shape of these small images will be, how long
these images will be displayed, and how often and how many they will appear in the
scene.

Finally, another of the concepts, and probably the most complex, is Artificial
Intelligence (AI). This allows Unity to create characters that are capable of
interacting in the scene, even avoiding collisions between its elements. This tool in
Unity is called NavMesh.


For this, through the agent creation inspector, the properties that will characterize the agent are defined, such as:

• Radius: radius that the character will have when moving to avoid
collisions.
• Height: Defines the maximum height of obstacles that the character will be
able to access by passing under them.
• Speed: Maximum speed in units per second that the character will have.
• Acceleration: Acceleration of the character's movement and actions.
• Area: will define the path the character will take and which ones he will not
be able to choose.

2.4. Development phases

Creating a new game requires a large number of tasks that can be grouped into
different phases:

1st) Design phase: this is the step prior to programming. At this stage it is necessary
to determine which will be the relevant aspects of the game, choose the theme and
the development of the story. It is also important to establish what the rules of the
game will be.

Once the story is documented, it is necessary to separate the game into parts. Each
of these parts will make up the game screens. You also have to define what the
menu will look like within the screens, as well as the placement of the objects within
it.

2nd) Code design: in this phase all the layers that make up the game are specified.
It's about separating all the basic aspects of the game from its functionality. This is
what is known as Framework.

The Framework will define how the game windows will be handled. Allows you to
ensure that objects occupy the correct space within the corresponding window. It
will also be responsible for handling user input events. These will, in most cases, be
collected from the computer's keyboard or mouse.


Another task of the Framework is file management, where reading and writing tasks
will be carried out, such as, for example, saving game preferences and scores.

It will also determine the handling of graphics, which establishes the pixels mapped
to the different screens. It is necessary to determine the position through the
coordinates of each of the pixels, as well as the color. Audio handling, to be able to
play, for example, background music in the game, will also be determined by the
Framework.

3rd) Asset Design: This is one of the most complicated phases and has the greatest
impact on a game. These are the creations of the different elements or models that
can be used within the game, such as: characters, logos, sounds, buttons, fonts, etc.

4th) Game logic design: this is the phase where you define how the game will
behave. The rules already designed will be applied, as well as the programming of
the behavior of each of the game events.

5th) Testing: one of the most important phases. This is when the entire application is
checked to assess the game's performance and the correct implementation of the
rest of the phases.

6th) Game distribution: once all the development phases have been completed, the
objective is for the product to be distributed. To do this, you need to export this
game in the same way as an application.

2.5. Object properties: light, textures, reflection and shadows

Each of the objects represented within a scene has certain properties, which are:

• Light: is what allows us to observe points of illumination within the scene. This
will bring the game to life. Light or a point of light on a particular part of a scene
leads to focusing your attention on it. It will indicate the projection of the
camera within the game. It is possible to add different light points in a scene, as
well as configure the color of a light.

• Textures: reflect the quality with which all the objects that appear within the
game environment can be appreciated. They are part of the textures, for
example, the materials. Within the materials, some can be distinguished such
as: water, metal, wood, fabrics, etc. To achieve a real effect in a texture,
mathematical algorithms called shaders are applied to these materials, which
allow defining what the color of each of the pixels that make up an object will


be.
• Reflections and shadows: These add a more realistic representation to objects.
To achieve this effect, what is done is to add a kind of outline to the graphic
components. The color of this shadow is defined and what the distance applied
to each object will be. Shadows usually match the projection of light, so that the
shadow appears as an effect of said lighting. With reflections and shadows it is
possible to establish the position that an object occupies within the scene.

These are some of the basic properties of objects. These will be configured
individually for each object depending on the possible interaction within a scene.


2.6. Application of the graphics engine functions. Rendering

One of the most complex tasks that a graphics engine has is the rendering of the
objects that make up a given scene. The processing of each of them requires a
certain amount of graphic resources that, in most cases, are offered by graphic
cards.

Rendering can be defined as the process of creating a real 2D or 3D image within a


scene, applying a series of filters from a designed model.

Some of the properties that define the rendering process are:

• Size: defines the size of the render in pixels. In most cases this will be applied to textures.
• Anti-Aliasing: Used to apply a smoothing filter to objects that appear to have
stepped shapes when rendered.
• Depth buffer: responsible for defining the depth of 3D objects in a scene. This
has a huge impact on the quality of the scene produced.
• Wrap mode: used to define the behavior of textures. For example, in a terrain
the repetition of a specific texture is defined that will be applied throughout
the scene.

In environments like Unity, it is possible to set these rendering properties through


the Render Texture function.


2.7. Application of scene graph functions. Types of nodes and their


use

Unity offers a simple tool for organizing and managing animations called Animator
Controller. This allows you to create a graph of actions within Unity, which allows
you to control all the animations of a character or object. It is possible to establish
an order of execution of the same depending on some of the rules or conditions of
the game.

This, for example, allows you to define the behavior of a character who normally
walks in one direction and who, when you press the space bar, will jump.

Each of these nodes will be the representation of a character action. These will
reflect the transitions between the most basic states.

Its use usually occurs during the use of directional movements of the character,
which are repeated periodically until the occurrence of another of the events. These
movements usually involve walking forward, backward, or diagonally. Other nodes
will define states such as character death, falls or collisions with other objects in the
scene, etc.


2.8. Execution analysis. Code optimization

During the development of a game it will be necessary to compile and debug the code many times. Unity offers an integrated IDE (MonoDevelop), which will be the default editor for programming the game code.

When you are editing a file within the project it will appear as a tab. The text editor
allows you to add breakpoints in the margins next to each of the lines of code you
want. Once these breakpoints are selected, debugging of the code begins through
the debug button. This will execute the code, stopping at the first breakpoint found
in the code. This allows you to see the values that all the variables have taken up to
that point.

It is also possible to navigate between the different stopping points to check the
correct behavior of the application.

In case of errors during compilation, Unity contains a log file called Debug.log, where
all messages displayed in the console will be stored. The most common thing is that
if there are errors in the code, Unity itself, when compiling, does not allow the game
to run, and displays a message at the bottom referencing the error or errors found.

Another useful tool within the Unity environment is the Unity Test Runner. This tool runs automated tests on the code, which helps to detect errors before the game is built.

In addition to having all these debugging tools, it is advisable for the developer to
have acquired a series of good programming and code structuring practices.

The code has to be as clean as possible, which will help in the later correction and
improvement of some functions. In projects with extensive code development, this
can pose a very large optimization problem.

Declared functions must be well defined and there should not be multiple functions
whose behavior is the same.


Literature

The Big Book of Android. Jesús Tomás. Marcombo.

Multimedia Programming and Mobile Devices. César San Juan Pastor. Garceta.

Android: Multimedia Programming and Mobile Devices. Iván López Montalbán, Manuel Martínez Carbonell, Juan Carlos Manrique Hernández. Garceta.

Webgraphy
https://developer.android.com/index.html

http://www.sgoliver.net/blog/android-programming-course/index-of-contents/
