Learning
“For the things we have to learn before we can
do them, we learn by doing them.”
― Aristotle, The Nicomachean Ethics
1
Final Presentation
Artificial Intelligence using
Metaheuristic Strategies
Students: Edan Shunem, Itzik Cohen
Supervisor: Kobi Nistel
Context: Project A+B
Semester: Winter, 2016
Date: 03/03/2016
Department of Electrical Engineering
Presentation Outline
• Project Goal
• Requirements
• Chosen Solution
• Background
• Test Problem: TSP Implementation
• Problems and Improvements
• Stock Exchange Implementation
• AI in PC Game
• Conclusions
• References
3
Project Goal
• Build a generic problem solver using the
Genetic Algorithm method.
• Create a multi-threaded Framework for a
Generic Genetic Algorithm.
• Create Test environments and applications.
• Optimize efficiency.
4
Requirements
• The project is implemented in C#.
• Graphical illustration using Microsoft XNA & MonoGame.
• Create a full Generic GA Framework.
• Create Test problem: Traveling Salesman
• Build solutions for real life problems:
– Stock Exchange investments.
– AI Engine for computer game.
5
Possible Solutions
• Create a dedicated, problem-specific solution for each problem.
– Pros: the problem is approached directly, only the necessary
calculations are done, and the solution can be very accurate.
– Cons: time consuming, expensive, complex, and prone to human
error or to overlooking important parameters.
• Calculate every possible solution (exhaustive search).
– Pros: guaranteed to find the best solution.
– Cons: not time-efficient; infeasible for complex problems.
Example: an exhaustive search on a 30-city TSP would take roughly 280 billion years (see the tour count below).
6
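To give a feel for the scale: a symmetric TSP over n cities has (n − 1)!/2 distinct tours, so for n = 30 that is 29!/2 ≈ 4.4 × 10^30 candidate routes, far beyond what exhaustive enumeration can cover in reasonable time on any realistic machine.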
Chosen Solution
• Generic Genetic algorithm optimization.
– Pros: Generic! Fast, low memory requirements, Easy
(not necessary to solve analytically), can find best or
good enough solution for various problems.
– Cons: Randomized – not optimal or complete,
Can get stuck on local maxima, requires
implementation of fitness and mutate functions.
7
Background
• A genetic algorithm is a search heuristic that mimics the
process of natural selection.
– Inheritance, Mutation, Crossover and Selection.
• Basic idea:
– Simulate natural selection, where the population is composed of
candidate solutions
– Evolving a population from which strong and diverse candidates
can emerge via mutation and crossover (mating).
8
Background
• The Genetic algorithm method:
– Create an initial population, either random or trivial.
– While the best candidate so far is not a solution:
• Create new population using successor functions.
• Evaluate the fitness of each candidate in the population.
– Return the best candidate found (a code sketch of this loop follows the flowchart below).
9
[Flowchart: Initialize Population -> Evaluate -> Selection -> Mutation, Crossover -> Termination? (No: repeat; Yes: Optimized Solution)]
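As a rough illustration of the loop in the flowchart, here is a minimal generational-loop sketch in C#. It is not the project's Cluster code: the delegate-based signatures, the keep-top-half selection, and the 50/50 mutate-or-cross split are assumptions made for brevity.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of the generational loop in the flowchart above (assumption:
// the project's real Cluster differs in selection, termination and threading).
static class GaSketch
{
    public static T Run<T>(
        List<T> population,              // initial population (random or trivial)
        Func<T, double> fitness,         // evaluation function (higher is better here)
        Func<T, Random, T> mutate,       // mutation successor function
        Func<T, T, Random, T> cross,     // crossover successor function
        int generations)
    {
        var rng = new Random();
        for (int g = 0; g < generations; g++)
        {
            // Evaluate and rank the current population by fitness.
            var ranked = population.OrderByDescending(fitness).ToList();

            // Selection: keep the top half as parents (truncation selection + elitism).
            var parents = ranked.Take(Math.Max(2, ranked.Count / 2)).ToList();

            // Build the next generation from the parents via mutation and crossover.
            var next = new List<T>(parents);
            while (next.Count < population.Count)
            {
                var a = parents[rng.Next(parents.Count)];
                var b = parents[rng.Next(parents.Count)];
                next.Add(rng.NextDouble() < 0.5 ? mutate(a, rng) : cross(a, b, rng));
            }
            population = next;
        }
        // Return the best candidate found in the final population.
        return population.OrderByDescending(fitness).First();
    }
}
```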
GA: Applications
• Optimization for problems with multiple parameters.
• Examples:
– Traveling salesman problem.
– Scheduling air traffic.
– NASA's ST5 antenna.
– Artificial Intelligence Engine.
– Real-time problem solving (Space Robot).
10
Why Generic?
• One framework to solve different types of problems.
• Not only mathematical or analytical problems which can be
described by a mathematical function.
• The user writes the necessary interface functions relevant to their
problem.
• Can handle abstract objects with many degrees of freedom.
• Constraint: there has to be a fitness function to determine a grade.
• Cons: harder to optimize.
11
GA Framework: UML
12
[UML class diagram. Annotations: external implementations of the Evaluation and ITrainable functions; Main creates the problem and runs the GA; the Cluster trains the population until termination.]
GA Framework: Details
• ITrainable: Interface functions required for a trainable object
(implemented externally).
– Mutate – Mutate the object’s properties.
– Cross – Cross properties between 2 objects.
– GetGrade – Retrieves the object’s grade.
– SetGrade - Sets the object’s grade.
– Clone – Create a cloned object.
• IEvaluation: The fitness function needed to evaluate every object’s grade.
• Cluster: The GA core performing the GA strategy (a rough interface sketch follows).
13
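For concreteness, the contracts above might look roughly like this in C#. The exact signatures in the project may differ (for example how grades are typed or how the Cluster loop is parameterized), so treat this as an assumption-based sketch rather than the framework's real API.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical C# shape of the contracts described above; the project's real
// signatures may differ.
public interface ITrainable
{
    void Mutate();                        // randomly perturb the object's properties
    ITrainable Cross(ITrainable other);   // combine properties of two objects
    double GetGrade();                    // last evaluated grade
    void SetGrade(double grade);          // store the grade computed by IEvaluation
    ITrainable Clone();                   // create a cloned object
}

public interface IEvaluation
{
    // Fitness function: computes a grade for a candidate solution.
    double Evaluate(ITrainable candidate);
}

// The Cluster drives the GA strategy over a population of ITrainables,
// grading them with an IEvaluation each generation.
public class Cluster
{
    private readonly List<ITrainable> _population;
    private readonly IEvaluation _evaluation;

    public Cluster(List<ITrainable> population, IEvaluation evaluation)
    {
        _population = population;
        _evaluation = evaluation;
    }

    public ITrainable Train(int generations)
    {
        for (int g = 0; g < generations; g++)
        {
            foreach (var candidate in _population)
                candidate.SetGrade(_evaluation.Evaluate(candidate));
            // ...selection, mutation and crossover as in the loop sketch earlier...
        }
        return _population.OrderByDescending(c => c.GetGrade()).First();
    }
}
```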
Traveling Salesman Problem
• Definition:
– Given a list of cities and the distances between each pair of cities, what is the shortest
possible route that visits each city exactly once and returns to the origin city?
• Computational complexity is NP-Hard
– NP: Class of computational problems for which a given solution can be verified as a
solution in polynomial time by a deterministic Turing machine.
– NP-Hard: Class of problems that are at least as hard as the hardest problems in NP.
Problems in NP-Hard do not have to be elements of NP; indeed, they may not even be
decidable problems.
14
TSP: UML Diagram
15
TSP: Details
• SalesmanProblem: Determines the following:
– Number of cities.
– Location of every city (from file or random).
– Distance matrix.
• SalesmanEvaluation: Evaluates the distance for a
specific path.
• SalesmanSolution: Creates a random path and
defines the interface functions for the
framework.
– Mutate: Switches the locations of 2 random cities.
– Cross: Smart cross between 2 solutions (see the sketch below).
16
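To make the Mutate and evaluation steps concrete, here is a hedged sketch of a two-city swap mutation and a tour-length evaluation over a distance matrix. The method and parameter names are illustrative, based on the SalesmanSolution/SalesmanEvaluation description above rather than the actual project code.

```csharp
using System;

// Assumption-based sketch of the TSP pieces described above, not the project code.
static class SalesmanSketch
{
    // Mutate: swap the positions of two randomly chosen cities in the path.
    public static void Mutate(int[] path, Random rng)
    {
        int i = rng.Next(path.Length);
        int j = rng.Next(path.Length);
        (path[i], path[j]) = (path[j], path[i]);
    }

    // Evaluation: total tour length over a distance matrix, closing the tour
    // by returning from the last city to the first.
    public static double PathLength(int[] path, double[,] dist)
    {
        double total = 0.0;
        for (int k = 0; k < path.Length; k++)
        {
            int from = path[k];
            int to = path[(k + 1) % path.Length];
            total += dist[from, to];
        }
        return total;
    }
}
```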
Results
• TSP problem with 17 cities.
– Optimal score achieved after 103 iterations.
17
[Figure: Runtime Analysis for 17 Cities (Grade vs. Iteration, log scale)]
Results
• TSP problem with 26 cities.
– Optimal score achieved after 193 iterations.
18
[Figure: Runtime Analysis for 26 Cities (Grade vs. Iteration, log scale)]
Results
• TSP problem with 42 cities.
– Close-to-optimal score achieved after 1852 iterations
(708 instead of 699).
19
[Figure: Runtime Analysis for 42 Cities (Grade vs. Iteration, log scale)]
Results
• Node selection visualization for TSP with 20
Cities.
20
Results
• TSP problem with 48 cities.
– Non-Optimal score achieved after 2105 iterations.
– Stuck on local minima! (37,000 instead of 10,000)
21
[Figure: Runtime Analysis for 48 Cities (Grade vs. Iteration, log scale)]
Problem Encountered
• The algorithm sometimes gets stuck on a local optimum.
– This is because the initial conditions differ from run to run,
since the initial solutions are random.
22
Improvements
• Implementation of Multi-Threading.
• Runtime troubleshooting of “stuck” threads (sketched below):
– Inserting random solutions.
– Changing the mutation factor (increasing mutation).
– Reversing the grading method to move away from the local
optimum.
• Improve the Cross function for TSP.
• Improve efficiency by saving the all-time best candidate.
23
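One way to realize the "stuck thread" troubleshooting described above is to watch the best grade and react when it stops improving. The sketch below is an assumption about how that could be wired, not the project's actual logic; the patience threshold and mutation-factor values are invented for illustration, and the caller would inject random solutions whenever Update returns true.

```csharp
using System;

// Assumption-based sketch of runtime troubleshooting for a stalled GA thread.
// Values such as the patience threshold and mutation factors are illustrative.
class StallHandler
{
    private double _bestGrade = double.MaxValue;   // minimization (tour length)
    private int _stagnantGenerations;

    public double MutationFactor { get; private set; } = 0.05;

    // Call once per generation with the best grade found so far.
    // Returns true when the caller should also inject random solutions.
    public bool Update(double bestGradeThisGeneration, int patience = 200)
    {
        if (bestGradeThisGeneration < _bestGrade)
        {
            _bestGrade = bestGradeThisGeneration;  // progress: reset the counters
            _stagnantGenerations = 0;
            MutationFactor = 0.05;
            return false;
        }

        if (++_stagnantGenerations < patience)
            return false;

        // Stuck: raise mutation pressure and ask for random solutions.
        MutationFactor = Math.Min(1.0, MutationFactor * 2);
        _stagnantGenerations = 0;
        return true;
    }
}
```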
Results
• TSP problem with 50 Random cities.
– Mutate factor Troubleshoot
24
[Figure: Runtime Analysis for 50 Random Cities with Mutate TS (Grade vs. Iteration, log scale); annotations: "Stuck on local minima", "Mutate Troubleshoot"]
Results
• TSP problem with 50 Random cities.
– Random solutions Troubleshoot
26
[Figure: Runtime Analysis for 50 Random Cities with RandSol TS (Grade vs. Iteration, log scale); annotations: "Stuck on local minima", "RandSol Troubleshoot", "Best Score!"]
Results
• TSP Genetic VS Greedy
27
Greedy is Better
Stocks Investments
• Goal:
– Create a stock broker capable of investing in 3 stocks and turning a profit.
– Use a Neural Network “Brain”.
• Train Set:
– 4 years of stocks data for 3 stocks (Intel, Nasdaq, Apple)
• Test Set:
– 1 year of stocks data for the same 3 stocks (Intel, Nasdaq, Apple)
• Harder Test Set:
– 3 years of stocks data for 3 stocks (Yelp, Nasdaq, Apple)
28
Neural Network
• Estimate or approximate functions that can
depend on a large number of inputs and are
generally unknown.
• System of interconnected “Neurons“
– Can compute values from inputs.
– Capable of machine learning.
– Pattern recognition (a minimal layer sketch follows).
29
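For context, each neuron computes a weighted sum of its inputs and passes it through an activation function. The following single-layer sketch is purely illustrative and is not the project's NN implementation; the tanh activation is an assumption.

```csharp
using System;

// Minimal sketch of a single fully connected layer with a tanh activation.
// Illustrative only; this is not the project's NN implementation.
static class NeuralSketch
{
    public static double[] Forward(double[] inputs, double[,] weights, double[] bias)
    {
        int outputs = bias.Length;
        var result = new double[outputs];
        for (int o = 0; o < outputs; o++)
        {
            double sum = bias[o];
            for (int i = 0; i < inputs.Length; i++)
                sum += weights[o, i] * inputs[i];   // weighted sum of the inputs
            result[o] = Math.Tanh(sum);             // squashing activation
        }
        return result;
    }
}
```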
Stock Exchange: UML Diagram
30
Stock Exchange: Details
• StockExchangeProblem: Determines the following:
– Initial Money.
– Stocks Values.
– Determines Train set and Test set.
• StockExchangeSolution: Creates a Neural Network brain
with different parameters.
– Mutate: mutates the NN functions and changes the net data.
– Cross: merges two NN.
• StockExchangeEvaluation:
– Updates the stocks portfolio and cash money according to the
NN commands and decisions.
– Defines constraints for the decision making.
– Calculates the worth and profit (see the replay sketch below).
31
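The evaluation amounts to replaying the price history, letting the NN issue per-stock decisions under the cash and holdings constraints, and grading the candidate by its final worth. The sketch below shows that replay loop under assumed conventions (one share per decision, a signal threshold of ±0.5); the project's actual constraints and decision encoding may differ.

```csharp
using System;

// Assumption-based sketch of the portfolio replay used to grade a candidate.
static class PortfolioSketch
{
    // prices[t][s] = price of stock s on day t; decide() returns one signal per
    // stock in [-1, 1]: above 0.5 = buy, below -0.5 = sell, otherwise hold.
    public static double Evaluate(double[][] prices,
                                  Func<double[], double[]> decide,
                                  double initialCash = 1000.0)
    {
        double cash = initialCash;
        var holdings = new int[prices[0].Length];

        for (int t = 0; t < prices.Length; t++)
        {
            double[] signals = decide(prices[t]);
            for (int s = 0; s < holdings.Length; s++)
            {
                if (signals[s] > 0.5 && cash >= prices[t][s])
                {
                    cash -= prices[t][s];   // buy one share if affordable
                    holdings[s]++;
                }
                else if (signals[s] < -0.5 && holdings[s] > 0)
                {
                    cash += prices[t][s];   // sell one share if any are held
                    holdings[s]--;
                }
            }
        }

        // Grade: final worth = remaining cash + holdings valued at the last prices.
        double worth = cash;
        double[] last = prices[prices.Length - 1];
        for (int s = 0; s < holdings.Length; s++)
            worth += holdings[s] * last[s];
        return worth;
    }
}
```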
Stock Exchange: Results
• Initial Conditions:
– Cash: $1000
– Number of stocks: 0
• Data Sets:
– Train Set Data: 4 years (80%)
– Test Set Data: 1 year (20%)
– Hard Test set Data: 3 years (stocks differ)
• Results:
– Train set Profit: 2800% - 3000%
– Test set Profit: 10% - 20%
– Hard Test set Profit: 5% - 10% (and higher)
32
[Figure: Train Set, Profit ($) vs. Iteration]
Results
35
[Figure: Cash Money VS Worth (legend: Money, Worth); annotation: "20% Profit!"]
Results (Behavior)
36
[Figures: Stocks Value over time (legend: Value 1, Value 2, Value 3) and Quantity of Stocks over time (legend: Stock1, Stock2, Stock3)]
AI for PC Game
• Goal:
– Design an Artificial Intelligence engine using a neural network to create a
“Combat Brain” for spaceships.
– Train 2D spaceships to fight each other in generic scenarios while
equipped with different items.
• Conditions:
– Initial population – 100 spaceships
– Generation size – 60 spaceships
• 30% previous generation, 25% mutations, 25% cross, 20% random
– Number of iterations - 20
37
AI: Sensors
• Neural network inputs were provided by smart
dedicated sensors (packed into the NN input vector as sketched below).
– Distance from Enemy
– Distance from Danger
– Angle from Enemy
– Angle from Danger
– Distance to Enemy angle
– Distance to Danger angle
• Enemy and AI hitpoints are referenced in Eval.
38
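Each frame, these sensor readings become the NN input vector. The sketch below shows one way to pack and normalize them; the field names follow the sensor list above, while the normalization constants (and how the "distance to angle" sensors are scaled) are assumptions.

```csharp
using System;

// Sketch: pack the sensor readings listed above into the NN input vector.
// The normalization constants are assumptions, not the project's values.
struct SensorReadings
{
    public double DistanceFromEnemy, DistanceFromDanger;
    public double AngleFromEnemy, AngleFromDanger;
    public double DistanceToEnemyAngle, DistanceToDangerAngle;

    // Normalize roughly to [-1, 1] so all NN inputs share a comparable range.
    public double[] ToInputVector(double maxDistance)
    {
        return new[]
        {
            DistanceFromEnemy / maxDistance,
            DistanceFromDanger / maxDistance,
            AngleFromEnemy / Math.PI,
            AngleFromDanger / Math.PI,
            DistanceToEnemyAngle / maxDistance,   // treated as distances here (assumption)
            DistanceToDangerAngle / maxDistance
        };
    }
}
```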
AI: Class Layout
39
[Class layout diagram: Game Engine; levels: Training, Sensor Test, 1v1 Test, Simulations; components: Genetic Algorithm (Interface, Core), Evaluation Container (Eval x3), Control NN, Sensors, Output Brains, Optimal Brain]
AI: Details
• Game Engine: Runs the game.
• Levels: Each level is designed for different
functionality.
– Training: Creates brains using GA and evaluation.
• GA – The genetic algorithm framework.
• Evaluation container – holds different evaluations to be
performed by GA interface.
• Control NN – NN with inputs according to Sensors, used by
evaluations.
– Sensor Test: Test the sensors used in NN.
– 1v1 Test: Test output brain vs AI or human.
– Simulation: Runs every generation’s brain
continuously.
40
AI: Results
• Evaluated AI was tested using different types
of spaceships (with different capabilities).
• Optimization was quick and by the 5th
generation most spaceships won the match.
• A candidate's score was determined by the
damage dealt and the life remaining.
41
AI: Simulation Results
42
[Figure: Score vs. Generation (1-11) for Ship1, Ship2, Ship3, Ship4]
AI: Simulation Video
43
Conclusions
• The Genetic Algorithm method proved to be
efficient in solving and optimizing problems.
• This type of optimization suits complicated,
unpredictable problems where a close-to-optimal
solution is required.
• The generic interface will allow this
framework to be used for various problems.
44
Conclusions
• The TSP test problem showed good results for a
relatively small number of cities.
– For fairly complicated instances we still reached optimal
solutions.
– For a larger number of cities the Greedy algorithm
proved to be better (even though it is not optimal).
• The Neural Network brain for the Stock Exchange
proved very successful on both test sets.
– We got rich!
• The AI for the spaceship PC game quickly
developed fighting skills and won matches.
45
References
• David E. Goldberg, The Design of Innovation: Lessons from and for Competent Genetic Algorithms, Kluwer Academic Publishers, 2002.
• Jinn-Tsong Tsai, Tung-Kuan Liu, and Jyh-Horng Chou, Hybrid Taguchi-Genetic Algorithm for Global Numerical Optimization, IEEE Transactions on Evolutionary Computation, 2004.
• Jean-Michel Renders and Stéphane P. Flasse, Hybrid Methods Using Genetic Algorithms for Global Optimization, IEEE Transactions on Systems, Man, and Cybernetics, 1996.
• Robert Ghanea-Hercock, Applied Evolutionary Algorithms in Java, Springer, 2003.
• Darrell Whitley, A Genetic Algorithm Tutorial, Colorado State University, 1994.
• Wikipedia: Genetic Algorithm, Metaheuristic, Heuristic, TSP.
• http://www.nasdaq.com/quotes/historical-quotes.aspx
46
Questions?
47
Editor's Notes
  • #2: The Greek philosopher Aristotle was known for rejecting Plato's theory of Forms, according to which our reality is only a lesser reflection of a higher, objective one; he regarded the material world as the real and only existence, so the way to understand reality is through empirical experience. With all due respect to Aristotle, what if the space of solutions is so vast that we could never solve the problem in reasonable time? This is where our project comes in: it solves problems heuristically, trying to estimate and narrow the search to regions with greater potential for a better solution. Heuristics: a word of Greek origin meaning "discovery"; a method of decision making or problem solving in which the decision maker reaches a conclusion without exhaustively scanning all possibilities.
  • #3: Self-introduction.
  • #6: AI: Artificial Intelligence.
  • #7: For a machine computing 1000 MIPS.
  • #8: What is unique about our algorithm is that it is generic.
  • #9: Inheritance: traits are passed from one generation of solutions to the next; only "successful" solutions pass on their traits. Mutation: to improve each generation, a solution is mutated (its traits are changed) at random. Crossover: merging traits of two successful solutions. Selection: after each generation, an evaluation grades every solution, and only successful solutions pass their traits to the next generation.
  • #11: The TSP is a classic problem for which an analytic solution is impossible for a large number of cities. Planning flight routes is a similar problem, aiming to minimize flight time and route length while keeping a safe distance between aircraft at every moment. NASA's ST5 antenna was designed with an optimization algorithm that found the optimal antenna satisfying the problem's constraints while maximizing efficiency for its intended use; such a design could not have been produced by a human. Because the algorithm is generic, it can be adapted to a wide range of problems, for example creating artificial intelligence for computer games, where the algorithm can literally "play" the game until it learns the optimal behavior. As part of the project goals we will produce an interface for optimizing an Intel object-recognition algorithm for the RealSense camera. For robots out of human reach, when one of the components fails the robot can learn by itself how to operate optimally without that component (say, without one wheel).
  • #12: What will be generic? The problem definition, the evaluation function, the mutation function and the cross function.
  • #13: In Main we create the user functions that define the problem (red), e.g. SalesmanProblem, which defines the problem to solve; the evaluation of a solution (blue), e.g. SalesmanEval; and the first-generation solutions and how they are "trained" (green). Main runs the Trainer (gray, the Genetic Algorithm module): at this point we have the problem, a set of initial solutions, a way to evaluate them and implementations of the genetic-algorithm functions, so all that remains is to "train" our solutions with these tools, which is what the Trainer does. ITrainable and IEvaluation are interface classes the Trainer knows how to work with; through them it evaluates and trains the solutions. The Trainer holds the training settings chosen by the user and initialized in the constructor (in Main): how many generations to train, the problem type (maximum or minimum), and stopping once a target solution is reached. The constructor also receives the set of initial ITrainable solutions and the IEvaluation object; with these, and with helper methods for sorting solutions, we optimize the solution of the generic problem.
  • #16: In red, SalesmanProblem: Point2 defines a point by its x, y coordinates and a distance function between two points. SalesmanProblem holds an array of points and uses the Point2 method to build a symmetric distance matrix between every two cities, which defines the problem. In the constructor we set the number of cities and choose whether to create a random distance matrix (a random problem) or load one from a file. An important member is salesmanEval of type SalesmanEval (in blue), the evaluation performed on the problem; for any other problem, a class inheriting from IEvaluation is needed to tell the GA how to evaluate. Here a solution is a sequence of cities (an array), and the evaluation is the total round-trip distance between all the cities. SalesmanEval inherits from IEvaluation and sums the city-to-city distances given by the distance matrix, which is received and updated via the class constructor. In green, SalesmanSolution: defines a solution held in the Path[] array, giving the order in which the cities are visited. In Main (not drawn) we create a population of a user-defined size, the initial candidate solutions; these random solutions are passed to the Trainer to optimize the initial population. The number of cities is set in the constructor, where a trivial solution (visit the cities in ascending order 1->2->...->N) can be created and then shuffled with the Shuffle method (in brown, in the RandomExtensions class); what is special about this Shuffle implementation is that it scrambles the solution randomly in O(numberOfCities). This class also implements the genetic-algorithm methods: Mutate takes two cities and swaps them; Cross swaps a solution with an entirely different one; Clone copies an object (solution).
  • #22: The whole run was 20,000 iterations.
  • #23: Internal note: adding random solutions did not significantly improve the algorithm's results and also slowed down the run.
  • #47: Wikipedia: Thread pool pattern.