MySQL™ Notes for Professionals
100+ pages of professional hints and tricks
Disclaimer
This is an unofficial free book created for educational purposes and is
not affiliated with official MySQL™ group(s) or company(s).
All trademarks and registered trademarks are the property of their respective owners.
GoalKicker.com – Free Programming Books
Contents
About ................................................................................................................................................................................... 1
Chapter 1: Getting started with MySQL ............................................................................................................. 2
Section 1.1: Getting Started ........................................................................................................................................... 2
Section 1.2: Information Schema Examples ................................................................................................................ 6
Chapter 2: Data Types ............................................................................................................................................... 7
Section 2.1: CHAR(n) ...................................................................................................................................................... 7
Section 2.2: DATE, DATETIME, TIMESTAMP, YEAR, and TIME ................................................................................... 7
Section 2.3: VARCHAR(255) -- or not .......................................................................................................................... 8
Section 2.4: INT as AUTO_INCREMENT ...................................................................................................................... 8
Section 2.5: Others ........................................................................................................................................................ 8
Section 2.6: Implicit / automatic casting ..................................................................................................................... 9
Section 2.7: Introduction (numeric) ............................................................................................................................. 9
Section 2.8: Integer Types .......................................................................................................................................... 10
Section 2.9: Fixed Point Types ................................................................................................................................... 10
Section 2.10: Floating Point Types ............................................................................................................................. 10
Section 2.11: Bit Value Type ........................................................................................................................................ 11
Chapter 3: SELECT ...................................................................................................................................................... 12
Section 3.1: SELECT with DISTINCT ............................................................................................................................ 12
Section 3.2: SELECT all columns (*) ........................................................................................................................... 12
Section 3.3: SELECT by column name ....................................................................................................................... 13
Section 3.4: SELECT with LIKE (%) ............................................................................................................................. 13
Section 3.5: SELECT with CASE or IF .......................................................................................................................... 15
Section 3.6: SELECT with Alias (AS) ........................................................................................................................... 15
Section 3.7: SELECT with a LIMIT clause ................................................................................................................... 16
Section 3.8: SELECT with BETWEEN .......................................................................................................................... 16
Section 3.9: SELECT with WHERE ............................................................................................................................... 18
Section 3.10: SELECT with LIKE(_) ............................................................................................................................. 18
Section 3.11: SELECT with date range ........................................................................................................................ 19
Chapter 4: Backticks ................................................................................................................................................. 20
Section 4.1: Backticks usage ....................................................................................................................................... 20
Chapter 5: NULL .......................................................................................................................................................... 21
Section 5.1: Uses for NULL .......................................................................................................................................... 21
Section 5.2: Testing NULLs ......................................................................................................................................... 21
Chapter 6: Limit and Offset ................................................................................................................... 22
Section 6.1: Limit and Offset relationship .................................................................................................. 22
Chapter 7: Creating databases ........................................................................................................................... 24
Section 7.1: Create database, users, and grants ...................................................................................................... 24
Section 7.2: Creating and Selecting a Database ...................................................................................................... 26
Section 7.3: MyDatabase ............................................................................................................................................ 26
Section 7.4: System Databases .................................................................................................................................. 27
Chapter 8: Using Variables .................................................................................................................................... 28
Section 8.1: Setting Variables ..................................................................................................................................... 28
Section 8.2: Row Number and Group By using variables in Select Statement ..................................................... 29
Chapter 9: Comment MySQL ................................................................................................................................. 31
Section 9.1: Adding comments ................................................................................................................................... 31
Section 9.2: Commenting table definitions ............................................................................................................... 31
Chapter 10: INSERT .................................................................................................................................................... 32
Section 10.1: INSERT, ON DUPLICATE KEY UPDATE ................................................................................................. 32
Section 10.2: Inserting multiple rows ......................................................................................................................... 32
Section 10.3: Basic Insert ............................................................................................................................................. 33
Section 10.4: INSERT with AUTO_INCREMENT + LAST_INSERT_ID() .................................................................... 33
Section 10.5: INSERT SELECT (Inserting data from another Table) ....................................................................... 35
Section 10.6: Lost AUTO_INCREMENT ids ................................................................................................................. 35
Chapter 11: DELETE ..................................................................................................................................................... 37
Section 11.1: Multi-Table Deletes ................................................................................................................................. 37
Section 11.2: DELETE vs TRUNCATE ........................................................................................................................... 38
Section 11.3: Multi-table DELETE ................................................................................................................................. 39
Section 11.4: Basic delete ............................................................................................................................................. 39
Section 11.5: Delete with Where clause ...................................................................................................................... 39
Section 11.6: Delete all rows from a table .................................................................................................................. 39
Section 11.7: LIMITing deletes ...................................................................................................................................... 39
Chapter 12: UPDATE ................................................................................................................................................... 41
Section 12.1: Update with Join Pattern ...................................................................................................................... 41
Section 12.2: Basic Update ......................................................................................................................................... 41
Section 12.3: Bulk UPDATE .......................................................................................................................................... 42
Section 12.4: UPDATE with ORDER BY and LIMIT ..................................................................................................... 42
Section 12.5: Multiple Table UPDATE ......................................................................................................................... 42
Chapter 13: ORDER BY .............................................................................................................................................. 44
Section 13.1: Contexts ................................................................................................................................................... 44
Section 13.2: Basic ........................................................................................................................................................ 44
Section 13.3: ASCending / DESCending ..................................................................................................................... 44
Section 13.4: Some tricks ............................................................................................................................................. 44
Chapter 14: Group By ............................................................................................................................................... 46
Section 14.1: GROUP BY using HAVING ...................................................................................................................... 46
Section 14.2: Group By using Group Concat ............................................................................................................. 46
Section 14.3: Group By Using MIN function ............................................................................................................... 46
Section 14.4: GROUP BY with AGGREGATE functions .............................................................................................. 47
Chapter 15: Error 1055: ONLY_FULL_GROUP_BY: something is not in GROUP BY clause ................................. 50
Section 15.1: Misusing GROUP BY to return unpredictable results: Murphy's Law ................................................ 50
Section 15.2: Misusing GROUP BY with SELECT *, and how to fix it ........................................................................ 50
Section 15.3: ANY_VALUE() ........................................................................................................................................ 51
Section 15.4: Using and misusing GROUP BY ........................................................................................................... 51
Chapter 16: Joins ......................................................................................................................................................... 53
Section 16.1: Joins visualized ....................................................................................................................................... 53
Section 16.2: JOIN with subquery ("Derived" table) ................................................................................................. 53
Section 16.3: Full Outer Join ........................................................................................................................................ 54
Section 16.4: Retrieve customers with orders -- variations on a theme ................................................................ 55
Section 16.5: Joining Examples .................................................................................................................................. 56
Chapter 17: JOINS: Join 3 tables with the same name of id ...................................................................... 57
Section 17.1: Join 3 tables on a column with the same name ................................................................................. 57
Chapter 18: UNION ...................................................................................................................................................... 58
Section 18.1: Combining SELECT statements with UNION ....................................................................................... 58
Section 18.2: Combining data with different columns ............................................................................. 58
Section 18.3: ORDER BY .............................................................................................................................................. 58
Section 18.4: Pagination via OFFSET ......................................................................................................................... 58
Section 18.5: Combining and merging data on different MySQL tables with the same columns into unique rows and running query ..................................................................... 59
Section 18.6: UNION ALL and UNION ......................................................................................................................... 59
Chapter 19: Arithmetic .............................................................................................................................................. 60
Section 19.1: Arithmetic Operators ............................................................................................................................. 60
Section 19.2: Mathematical Constants ...................................................................................................................... 60
Section 19.3: Trigonometry (SIN, COS) ...................................................................................................................... 60
Section 19.4: Rounding (ROUND, FLOOR, CEIL) ....................................................................................................... 62
Section 19.5: Raise a number to a power (POW) ..................................................................................................... 62
Section 19.6: Square Root (SQRT) ............................................................................................................................. 62
Section 19.7: Random Numbers (RAND) ................................................................................................................... 63
Section 19.8: Absolute Value and Sign (ABS, SIGN) ................................................................................................. 63
Chapter 20: String operations ............................................................................................................................. 64
Section 20.1: LENGTH() ............................................................................................................................................... 65
Section 20.2: CHAR_LENGTH() .................................................................................................................................. 65
Section 20.3: HEX(str) ................................................................................................................................................. 65
Section 20.4: SUBSTRING() ........................................................................................................................................ 65
Section 20.5: UPPER() / UCASE() .............................................................................................................................. 66
Section 20.6: STR_TO_DATE - Convert string to date ............................................................................................ 66
Section 20.7: LOWER() / LCASE() .............................................................................................................................. 66
Section 20.8: REPLACE() ............................................................................................................................................. 66
Section 20.9: Find element in comma separated list .............................................................................................. 66
Chapter 21: Date and Time Operations ........................................................................................................... 68
Section 21.1: Date arithmetic ....................................................................................................................................... 68
Section 21.2: SYSDATE(), NOW(), CURDATE() .......................................................................................................... 68
Section 21.3: Testing against a date range ............................................................................................................... 69
Section 21.4: Extract Date from Given Date or DateTime Expression ................................................................... 69
Section 21.5: Using an index for a date and time lookup ........................................................................................ 69
Section 21.6: Now() ...................................................................................................................................................... 70
Chapter 22: Handling Time Zones ...................................................................................................................... 71
Section 22.1: Retrieve the current date and time in a particular time zone .......................................................... 71
Section 22.2: Convert a stored `DATE` or `DATETIME` value to another time zone ............................................. 71
Section 22.3: Retrieve stored `TIMESTAMP` values in a particular time zone ....................................................... 71
Section 22.4: What is my server's local time zone setting? .................................................................................... 71
Section 22.5: What time_zone values are available in my server? ....................................................................... 72
Chapter 23: Regular Expressions ........................................................................................................................ 73
Section 23.1: REGEXP / RLIKE ..................................................................................................................................... 73
Chapter 24: VIEW ........................................................................................................................................................ 75
Section 24.1: Create a View ........................................................................................................................................ 75
Section 24.2: A view from two tables ........................................................................................................................ 76
Section 24.3: DROPPING A VIEW ............................................................................................................................... 76
Section 24.4: Updating a table via a VIEW ............................................................................................................... 76
Chapter 25: Table Creation ................................................................................................................................... 77
Section 25.1: Table creation with Primary Key ......................................................................................................... 77
Section 25.2: Basic table creation ............................................................................................................................. 78
Section 25.3: Table creation with Foreign Key ......................................................................................................... 78
Section 25.4: Show Table Structure ........................................................................................................................... 79
Section 25.5: Cloning an existing table ..................................................................................................................... 80
Section 25.6: Table Create With TimeStamp Column To Show Last Update ....................................................... 80
Section 25.7: CREATE TABLE FROM SELECT ............................................................................................................ 80
Chapter 26: ALTER TABLE ....................................................................................................................................... 82
Section 26.1: Changing storage engine; rebuild table; change file_per_table ..................................................... 82
Section 26.2: ALTER COLUMN OF TABLE ................................................................................................................. 82
Section 26.3: Change auto-increment value ............................................................................................................ 82
Section 26.4: Renaming a MySQL table .................................................................................................................... 82
Section 26.5: ALTER table add INDEX ....................................................................................................................... 83
Section 26.6: Changing the type of a primary key column .................................................................................... 83
Section 26.7: Change column definition .................................................................................................................... 83
Section 26.8: Renaming a MySQL database ............................................................................................................ 83
Section 26.9: Swapping the names of two MySQL databases ............................................................................... 84
Section 26.10: Renaming a column in a MySQL table ............................................................................................. 84
Chapter 27: Drop Table ........................................................................................................................................... 86
Section 27.1: Drop Table ............................................................................................................................................. 86
Section 27.2: Drop tables from database ................................................................................................................. 86
Chapter 28: MySQL LOCK TABLE ........................................................................................................................ 87
Section 28.1: Row Level Locking ................................................................................................................................ 87
Section 28.2: Mysql Locks ........................................................................................................................................... 88
Chapter 29: Error codes .......................................................................................................................................... 90
Section 29.1: Error code 1064: Syntax error ............................................................................................................... 90
Section 29.2: Error code 1175: Safe Update ............................................................................................................... 90
Section 29.3: Error code 1215: Cannot add foreign key constraint ......................................................................... 90
Section 29.4: 1067, 1292, 1366, 1411 - Bad Value for number, date, default, etc ...................................................... 92
Section 29.5: 1045 Access denied .............................................................................................................................. 92
Section 29.6: 1236 "impossible position" in Replication ........................................................................................... 92
Section 29.7: 2002, 2003 Cannot connect ................................................................................................................ 93
Section 29.8: 126, 127, 134, 144, 145 .............................................................................................................................. 93
Section 29.9: 139 .......................................................................................................................................................... 93
Section 29.10: 1366 ....................................................................................................................................................... 93
Section 29.11: 126, 1054, 1146, 1062, 24 ......................................................................................................................... 94
Chapter 30: Stored routines (procedures and functions) ..................................................................... 96
Section 30.1: Stored procedure with IN, OUT, INOUT parameters ......................................................................... 96
Section 30.2: Create a Function ................................................................................................................................. 97
Section 30.3: Cursors ................................................................................................................................................... 98
Section 30.4: Multiple ResultSets ............................................................................................................................... 99
Section 30.5: Create a function .................................................................................................................................. 99
Chapter 31: Indexes and Keys ............................................................................................................................. 101
Section 31.1: Create index .......................................................................................................................................... 101
Section 31.2: Create unique index ............................................................................................................................ 101
Section 31.3: AUTO_INCREMENT key ...................................................................................................................... 101
Section 31.4: Create composite index ...................................................................................................................... 101
Section 31.5: Drop index ............................................................................................................................................ 102
Chapter 32: Full-Text search ............................................................................................................................... 103
Section 32.1: Simple FULLTEXT search .................................................................................................................... 103
Section 32.2: Simple BOOLEAN search ................................................................................................................... 103
Section 32.3: Multi-column FULLTEXT search ........................................................................................................ 103
Chapter 33: PREPARE Statements ................................................................................................................... 105
Section 33.1: PREPARE, EXECUTE and DEALLOCATE PREPARE Statements ...................................................... 105
Section 33.2: Alter table with add column .............................................................................................................. 105
Chapter 34: JSON ..................................................................................................................................................... 106
Section 34.1: Create simple table with a primary key and JSON field ................................................................. 106
Section 34.2: Insert a simple JSON .......................................................................................................................... 106
Section 34.3: Updating a JSON field ....................................................................................................................... 106
Section 34.4: Insert mixed data into a JSON field ................................................................................................. 107
Section 34.5: CAST data to JSON type ................................................................................................................... 107
Section 34.6: Create Json Object and Array .......................................................................................................... 107
Chapter 35: Extract values from JSON type .............................................................................................. 108
Section 35.1: Read JSON Array value ..................................................................................................................... 108
Section 35.2: JSON Extract Operators .................................................................................................................... 108
Chapter 36: MySQL Admin .................................................................................................................................... 110
Section 36.1: Atomic RENAME & Table Reload ....................................................................................................... 110
Section 36.2: Change root password ...................................................................................................................... 110
Section 36.3: Drop database .................................................................................................................................... 110
Chapter 37: TRIGGERS ........................................................................................................................................... 111
Section 37.1: Basic Trigger ........................................................................................................................................ 111
Section 37.2: Types of triggers ................................................................................................................................ 111
Chapter 38: Configuration and tuning ........................................................................................................... 113
Section 38.1: InnoDB performance .......................................................................................................................... 113
Section 38.2: Parameter to allow huge data to insert ........................................................................................... 113
Section 38.3: Increase the string limit for group_concat ...................................................................................... 113
Section 38.4: Minimal InnoDB configuration .......................................................................................................... 113
Section 38.5: Secure MySQL encryption ................................................................................................................. 114
Chapter 39: Events ................................................................................................................................................... 115
Section 39.1: Create an Event ................................................................................................................................... 115
Chapter 40: ENUM ................................................................................................................................................... 118
Section 40.1: Why ENUM? ......................................................................................................................................... 118
Section 40.2: VARCHAR as an alternative .............................................................................................................. 118
Section 40.3: Adding a new option .......................................................................................................................... 118
Section 40.4: NULL vs NOT NULL ............................................................................................................................ 118
Chapter 41: Install Mysql container with Docker-Compose ............................................................... 120
Section 41.1: Simple example with docker-compose ............................................................................................. 120
Chapter 42: Character Sets and Collations ................................................................................................ 121
Section 42.1: Which CHARACTER SET and COLLATION? ...................................................................................... 121
Section 42.2: Setting character sets on tables and fields ..................................................................................... 121
Section 42.3: Declaration .......................................................................................................................................... 121
Section 42.4: Connection .......................................................................................................................................... 122
Chapter 43: MyISAM Engine ................................................................................................................................ 123
Section 43.1: ENGINE=MyISAM .................................................................................................................................. 123
Chapter 44: Converting from MyISAM to InnoDB ................................................................................... 124
Section 44.1: Basic conversion ................................................................................................................................. 124
Section 44.2: Converting All Tables in one Database ........................................................................................... 124
Chapter 45: Transaction ...................................................................................................................................... 125
Section 45.1: Start Transaction ................................................................................................................................. 125
Section 45.2: COMMIT , ROLLBACK and AUTOCOMMIT ....................................................................................... 126
Section 45.3: Transaction using JDBC Driver ......................................................................................................... 128
Chapter 46: Log files .............................................................................................................................................. 131
Section 46.1: Slow Query Log ................................................................................................................................... 131
Section 46.2: A List .................................................................................................................................................... 131
Section 46.3: General Query Log ............................................................................................................................. 132
Section 46.4: Error Log ............................................................................................................................................. 133
Chapter 47: Clustering ........................................................................................................................................... 135
Section 47.1: Disambiguation ................................................................................................................................... 135
Chapter 48: Partitioning ....................................................................................................................................... 136
Section 48.1: RANGE Partitioning ............................................................................................................................. 136
Section 48.2: LIST Partitioning ................................................................................................................................. 136
Section 48.3: HASH Partitioning ............................................................................................................................... 137
Chapter 49: Replication ........................................................................................................................................ 138
Section 49.1: Master - Slave Replication Setup ....................................................................................................... 138
Section 49.2: Replication Errors ............................................................................................................................... 140
Chapter 50: Backup using mysqldump ......................................................................................................... 142
Section 50.1: Specifying username and password ................................................................................................ 142
Section 50.2: Creating a backup of a database or table ...................................................................................... 142
Section 50.3: Restoring a backup of a database or table .................................................................................... 143
Section 50.4: Transferring data from one MySQL server to another ................................................................... 143
Section 50.5: mysqldump from a remote server with compression .................................................................... 144
Section 50.6: restore a gzipped mysqldump file without uncompressing .......................................................... 144
Section 50.7: Backup database with stored procedures and functions .............................................................. 144
Section 50.8: Backup direct to Amazon S3 with compression ............................................................................. 144
Chapter 51: mysqlimport ...................................................................................................................................... 145
Section 51.1: Basic usage ........................................................................................................................................... 145
Section 51.2: Using a custom field-delimiter ........................................................................................................... 145
Section 51.3: Using a custom row-delimiter ............................................................................................................ 145
Section 51.4: Handling duplicate keys ..................................................................................................................... 145
Section 51.5: Conditional import .............................................................................................................................. 146
Section 51.6: Import a standard csv ........................................................................................................................ 146
Chapter 52: LOAD DATA INFILE ......................................................................................................................... 147
Section 52.1: using LOAD DATA INFILE to load large amount of data to database .......................................... 147
Section 52.2: Load data with duplicates ................................................................................................................. 148
Section 52.3: Import a CSV file into a MySQL table ............................................................................................... 148
Chapter 53: MySQL Unions .................................................................................................................................. 149
Section 53.1: Union operator .................................................................................................................................... 149
Section 53.2: Union ALL ............................................................................................................................................ 149
Section 53.3: UNION ALL With WHERE ................................................................................................................... 150
Chapter 54: MySQL client .................................................................................................................................... 151
Section 54.1: Base login ............................................................................................................................................. 151
Section 54.2: Execute commands ............................................................................................................................ 151
Chapter 55: Temporary Tables ......................................................................................................................... 153
Section 55.1: Create Temporary Table .................................................................................................................... 153
Section 55.2: Drop Temporary Table ...................................................................................................................... 153
Chapter 56: Customize PS1 ................................................................................................................................... 154
Section 56.1: Customize the MySQL PS1 with current database ........................................................................... 154
Section 56.2: Custom PS1 via MySQL configuration file ........................................................................................ 154
Chapter 57: Dealing with sparse or missing data ................................................................................... 155
Section 57.1: Working with columns containing NULL values .................................................................. 155
Chapter 58: Connecting with UTF-8 Using Various Programming Languages ........................... 158
Section 58.1: Python .................................................................................................................................................. 158
Section 58.2: PHP ...................................................................................................................................................... 158
Chapter 59: Time with subsecond precision ............................................................................................... 159
Section 59.1: Get the current time with millisecond precision ............................................................................... 159
Section 59.2: Get the current time in a form that looks like a Javascript timestamp ....................................... 159
Section 59.3: Create a table with columns to store sub-second time ................................................................. 159
Section 59.4: Convert a millisecond-precision date / time value to text ............................................................. 159
Section 59.5: Store a Javascript timestamp into a TIMESTAMP column ............................................................ 160
Chapter 60: One to Many ..................................................................................................................................... 161
Section 60.1: Example Company Tables ................................................................................................................. 161
Section 60.2: Get the Employees Managed by a Single Manager ....................................................................... 161
Section 60.3: Get the Manager for a Single Employee ......................................................................................... 161
Chapter 61: Server Information ......................................................................................................................... 163
Section 61.1: SHOW VARIABLES example ................................................................................................................ 163
Section 61.2: SHOW STATUS example .................................................................................................................... 163
Chapter 62: SSL Connection Setup .................................................................................................................. 165
Section 62.1: Setup for Debian-based systems ...................................................................................................... 165
Section 62.2: Setup for CentOS7 / RHEL7 .............................................................................................................. 167
Chapter 63: Create New User ............................................................................................................................. 171
Section 63.1: Create a MySQL User ......................................................................................................................... 171
Section 63.2: Specify the password ......................................................................................................................... 171
Section 63.3: Create new user and grant all privileges to schema ....................................................... 171
Section 63.4: Renaming user .................................................................................................................................... 171
Chapter 64: Security via GRANTs .................................................................................................................... 172
Section 64.1: Best Practice ........................................................................................................................................ 172
Section 64.2: Host (of user@host) ........................................................................................................................... 172
Chapter 65: Change Password ........................................................................................................................... 173
Section 65.1: Change MySQL root password in Linux ............................................................................................ 173
Section 65.2: Change MySQL root password in Windows .................................................................................... 173
Section 65.3: Process ................................................................................................................................................ 174
Chapter 66: Recover and reset the default root password for MySQL 5.7+ ............................. 175
Section 66.1: What happens when the initial start up of the server ..................................................................... 175
Section 66.2: How to change the root password by using the default password .............................................. 175
Section 66.3: reset root password when " /var/run/mysqld' for UNIX socket file don't exists" ....................... 175
Chapter 67: Recover from lost root password ......................................................................................... 178
Section 67.1: Set root password, enable root user for socket and http access .................................................. 178
Chapter 68: MySQL Performance Tips .......................................................................................................... 179
Section 68.1: Building a composite index ................................................................................................................ 179
Section 68.2: Optimizing Storage Layout for InnoDB Tables ............................................................................... 179
Chapter 69: Performance Tuning ..................................................................................................................... 181
Section 69.1: Don't hide in function .......................................................................................................................... 181
Section 69.2: OR ........................................................................................................................................................ 181
Section 69.3: Add the correct index ......................................................................................................................... 181
Section 69.4: Have an INDEX ................................................................................................................................... 182
Section 69.5: Subqueries ........................................................................................................................................... 182
Section 69.6: JOIN + GROUP BY .............................................................................................................................. 182
Section 69.7: Set the cache correctly ...................................................................................................................... 183
Section 69.8: Negatives ............................................................................................................................................ 183
Appendix A: Reserved Words ............................................................................................................................. 184
Section A.1: Errors due to reserved words .............................................................................................................. 184
Credits ............................................................................................................................................................................ 185
You may also like ...................................................................................................................................................... 188
About
Please feel free to share this PDF with anyone for free; the
latest version of this book can be downloaded from:
http://GoalKicker.com/MySQLBook
This MySQL™ Notes for Professionals book is compiled from Stack Overflow
Documentation; the content is written by the beautiful people at Stack Overflow.
Text content is released under Creative Commons BY-SA; see the credits at the end
of this book for the people who contributed to the various chapters. Images may be copyright
of their respective owners unless otherwise specified.
This is an unofficial free book created for educational purposes and is not
affiliated with official MySQL™ group(s) or company(s) nor Stack Overflow. All
trademarks and registered trademarks are the property of their respective
company owners.
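The examples below assume a database named mydb. A minimal sketch of creating it (the exact statement is an assumption):

CREATE DATABASE mydb;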
USE mydb;
Return value:
Database Changed
id int unsigned NOT NULL auto_increment creates the id column. This type of field assigns a unique numeric
ID to each record in the table (meaning that no two rows can have the same id in this case); MySQL
automatically assigns a new, unique value to the record's id field (starting with 1).
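A sketch of the kind of table definition being described (the exact DDL is an assumption; the column names match the sample output further below):

CREATE TABLE mytable (
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
    username VARCHAR(30) NOT NULL,
    email    VARCHAR(50) NOT NULL,
    PRIMARY KEY (id)
);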
varchar values (a.k.a. strings) can also be inserted using single quotes.
int values can be inserted in a query without quotes. Strings and dates must be enclosed in single quotes (')
or double quotes (").
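A minimal sketch of inserting a row and reading it back (the statements and values are assumptions chosen to match the sample output below):

INSERT INTO mytable (username, email)
VALUES ('myuser', '[email protected]');

SELECT * FROM mytable;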
Return value:
+----+----------+--------------------+
| id | username | email              |
+----+----------+--------------------+
|  1 | myuser   | [email protected] |
+----+----------+--------------------+
SHOW databases;
Return value:
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
+--------------------+
You can think of "information_schema" as a "master database" that provides access to database metadata.
SHOW tables;
Return value:
+----------------+
| Tables_in_mydb |
+----------------+
| mytable        |
+----------------+
DESCRIBE databaseName.tableName;
DESCRIBE tableName;
Return value:
+-----------+----------------+--------+---------+-------------------+-------+
| Field     | Type           | Null   | Key     | Default           | Extra |
+-----------+----------------+--------+---------+-------------------+-------+
Key refers to the type of key that may affect the field: Primary (PRI), Unique (UNI), and so on.
Creating user
First, you need to create a user and then give the user permissions on certain databases/tables. While creating the
user, you also need to specify where this user can connect from.
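A minimal sketch of the two variants (the user name billy and the password are placeholders, not taken from the original):

CREATE USER 'billy'@'localhost' IDENTIFIED BY 'some_password';
CREATE USER 'billy'@'%' IDENTIFIED BY 'some_password';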
The first statement creates a user that can only connect on the local machine where the database is hosted.
The second statement creates a user that can connect from anywhere (except the local machine).
Adding privileges
Grant common, basic privileges to the user for all tables of the specified database:
Grant all privileges to the user for all tables on all databases (attention with this):
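A hedged sketch of both grants (the first statement is the basic grant, the second the all-privileges grant; databaseName, the user, and its host are placeholders):

GRANT SELECT, INSERT, UPDATE, DELETE ON databaseName.* TO 'billy'@'localhost';
GRANT ALL PRIVILEGES ON *.* TO 'billy'@'localhost' WITH GRANT OPTION;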
As demonstrated above, *.* targets all databases and tables, databaseName.* targets all tables of the specific
database. It is also possible to specify database and table like so databaseName.tableName.
WITH GRANT OPTION should be left out if the user need not be able to grant other users privileges.
ALL
SELECT
INSERT
UPDATE
Note
Generally, you should try to avoid using column or table names containing spaces or using reserved words in SQL.
For example, it's best to avoid names like table or first name.
If you must use such names, put them between back-tick `` delimiters. For example:
SELECT `first name` FROM `table` WHERE `first name` LIKE 'a%';
Easily search through all Stored Procedures for words and wildcards.
Most use cases for CHAR(n) involve strings that contain English characters, hence should be CHARACTER SET ascii.
(latin1 will do just as well.)
The DATETIME type includes the time with a format of 'YYYY-MM-DD HH:MM:SS'. It has a range from '1000-01-01
00:00:00' to '9999-12-31 23:59:59'.
The TIMESTAMP type is an integer type comprising date and time with an effective range from '1970-01-01 00:00:01'
UTC to '2038-01-19 03:14:07' UTC.
The YEAR type represents a year and holds a range from 1901 to 2155.
The TIME type represents a time with a format of 'HH:MM:SS' and holds a range from '-838:59:59' to '838:59:59'.
Storage Requirements:
|-----------|--------------------|--------------------------------------|
| Data Type | Before MySQL 5.6.4 | As of MySQL 5.6.4                    |
|-----------|--------------------|--------------------------------------|
| YEAR      | 1 byte             | 1 byte                               |
| DATE      | 3 bytes            | 3 bytes                              |
| TIME      | 3 bytes            | 3 bytes + fractional seconds storage |
| DATETIME  | 8 bytes            | 5 bytes + fractional seconds storage |
| TIMESTAMP | 4 bytes            | 4 bytes + fractional seconds storage |
|-----------|--------------------|--------------------------------------|
|------------------------------|------------------|
| Fractional Seconds Precision | Storage Required |
|------------------------------|------------------|
| 0                            | 0 bytes          |
| 1, 2                         | 1 byte           |
| 3, 4                         | 2 bytes          |
| 5, 6                         | 3 bytes          |
|------------------------------|------------------|
See the MySQL Manual Pages DATE, DATETIME, and TIMESTAMP Types, Data Type Storage Requirements, and
Fractional Seconds in Time Values.
First, I will mention some common strings that are always hex, or otherwise limited to ASCII. For these, you should
specify CHARACTER SET ascii (latin1 is ok) so that it will not waste space:
Why not simply 255? There are two reasons to avoid the common practice of using (255) for everything.
When a complex SELECT needs to create a temporary table (for a subquery, UNION, GROUP BY, etc.), the
preferred choice is to use the MEMORY engine, which puts the data in RAM. But VARCHARs are turned into CHAR
in the process. This makes VARCHAR(255) CHARACTER SET utf8mb4 take 1020 bytes. That can lead to needing
to spill to disk, which is slower.
In certain situations, InnoDB will look at the potential size of the columns in a table and decide that it will be
too big, aborting a CREATE TABLE.
Usage hints for *TEXT, CHAR, and VARCHAR, plus some Best Practice:
Keep in mind that certain operations "burn" AUTO_INCREMENT ids. This could lead to an unexpected gap. Examples:
INSERT IGNORE and REPLACE. They may preallocate an id before realizing that it won't be needed. This is expected
behavior and by design in the InnoDB engine and should not discourage their use.
INTs
FLOAT, DOUBLE, and DECIMAL
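The statement being described below appears to have been lost in extraction; presumably:
select '123' * 2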
To perform the multiplication by 2, MySQL automatically converts the string '123' into a number.
Return value:
246
The conversion to a number proceeds from left to right. If the conversion is not possible, the result is 0.
select '123ABC' * 2
Return value:
246
select 'ABC123' * 2
Return value:
0
Group Types
Integer Types INTEGER, INT, SMALLINT, TINYINT, MEDIUMINT, BIGINT
Decimal
These values are stored in binary format. In a column declaration, the precision and scale should be specified
Precision represents the number of significant digits that are stored for values.
salary DECIMAL(5,2)
5 represents the precision and 2 represents the scale. For this example, the range of values that can be stored in
this column is -999.99 to 999.99
Although MySQL also permits the (M,D) qualifier, do not use it. (M,D) means that values can be stored with up to M total
digits, of which D may be after the decimal point.
Because floating-point values are approximate and not stored as exact values, attempts to treat them as exact in
comparisons may lead to problems. Note in particular that a FLOAT value rarely equals a DOUBLE value.
b'111' -> 7
b'10000000' -> 128
Sometimes it is handy to use 'shift' to construct a single-bit value, for example (1 << 7) for 128.
The maximum combined size of all BIT columns in an NDB table is 4096.
INSERT INTO CAR (`car_id`, `name`, `price`) VALUES (1, 'Audi A1', '20000');
INSERT INTO CAR (`car_id`, `name`, `price`) VALUES (2, 'Audi A1', '15000');
INSERT INTO CAR (`car_id`, `name`, `price`) VALUES (3, 'Audi A2', '40000');
INSERT INTO CAR (`car_id`, `name`, `price`) VALUES (4, 'Audi A2', '40000');
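The result below presumably comes from a DISTINCT query over both columns, along these lines:
SELECT DISTINCT name, price FROM CAR;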
+---------+----------+
| name | price |
+---------+----------+
| Audi A1 | 20000.00 |
| Audi A1 | 15000.00 |
| Audi A2 | 40000.00 |
+---------+----------+
DISTINCT works across all columns to deliver the results, not individual columns. The latter is often a misconception
of new SQL developers. In short, it is the distinctness at the row-level of the result set that matters, not distinctness
at the column-level. To visualize this, look at "Audi A1" in the above result set.
For later versions of MySQL, DISTINCT has implications with its use alongside ORDER BY. The setting for
ONLY_FULL_GROUP_BY comes into play as seen in the following MySQL Manual Page entitled MySQL Handling of
GROUP BY.
Result
+------+----------+----------+
| id | username | password |
+------+----------+----------+
| 1 | admin | admin |
| 2 | stack | stack |
+------+----------+----------+
2 rows in set (0.00 sec)
You can select all columns from one table in a join by doing:
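For example (a sketch; the joined table and join condition are made up):
SELECT stack.* FROM stack JOIN signups ON signups.username = stack.username;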
Best Practice Do not use * unless you are debugging or fetching the row(s) into associative arrays, otherwise
schema changes (ADD/DROP/rearrange columns) can lead to nasty application errors. Also, if you give the list of
columns you need in your result set, MySQL's query planner often can optimize the query.
Pros:
1. When you add/remove columns, you don't have to make changes where you did use SELECT *
2. It's shorter to write
3. You also see the answers, so can SELECT *-usage ever be justified?
Cons:
1. You are returning more data than you need. Say you add a VARBINARY column that contains 200k per row.
You only need this data in one place for a single record - using SELECT * you can end up returning 2MB per
10 rows that you don't need
2. Explicit about what data is used
3. Specifying columns means you get an error when a column is removed
4. The query processor has to do some more work - figuring out what columns exist on the table (thanks
@vinodadhikary)
5. You can find where a column is used more easily
6. You get all columns in joins if you use SELECT *
7. You can't safely use ordinal referencing (though using ordinal references for columns is bad practice in itself)
8. In complex queries with TEXT fields, the query may be slowed down by less-optimal temp table processing
INSERT INTO stack (`id`, `username`, `password`) VALUES (1, 'Foo', 'hiddenGem');
INSERT INTO stack (`id`, `username`, `password`) VALUES (2, 'Baa', 'verySecret');
Query
Result
+------+
| id |
+------+
| 1 |
| 2 |
+------+
"adm" anywhere:
+----+-----------+
| id | username |
+----+-----------+
| 1 | admin |
| 2 | k admin |
| 3 | adm |
| 4 | a adm b |
| 5 | b XadmY c |
| 6 | adm now |
+----+-----------+
+----+----------+
| id | username |
+----+----------+
| 1 | admin |
| 3 | adm |
| 6 | adm now |
+----+----------+
+----+----------+
| id | username |
+----+----------+
| 3 | adm |
+----+----------+
Just as the % character in a LIKE clause matches any number of characters, the _ character matches just one
character. For example,
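a pattern like the following (a sketch; the exact query from the original is not shown) matches the five-character value 'admin' but not 'adm':
SELECT * FROM stack WHERE username LIKE 'adm__';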
+----+----------+
| id | username |
+----+----------+
| 1 | admin |
+----+----------+
SELECT st.name,
st.percentage,
CASE WHEN st.percentage >= 35 THEN 'Pass' ELSE 'Fail' END AS `Remark`
FROM student AS st ;
Result
+-------+------------+--------+
| name  | percentage | Remark |
+-------+------------+--------+
| Isha  |         67 | Pass   |
| Rucha |         28 | Fail   |
| Het   |         35 | Pass   |
| Ansh  |         92 | Pass   |
+-------+------------+--------+
Or with IF
SELECT st.name,
st.percentage,
IF(st.percentage >= 35, 'Pass', 'Fail') AS `Remark`
FROM student AS st ;
N.B.
This means: IF st.percentage >= 35 is TRUE, then return 'Pass'; ELSE return 'Fail'.
Query
+-------+
| val |
+-------+
| admin |
| stack |
+-------+
2 rows in set (0.00 sec)
SELECT *
FROM Customers
ORDER BY CustomerID
LIMIT 3;
Result:
Best Practice Always use ORDER BY when using LIMIT; otherwise the rows you will get will be unpredictable.
Query:
SELECT *
FROM Customers
ORDER BY CustomerID
LIMIT 2,1;
Explanation:
When a LIMIT clause contains two numbers, it is interpreted as LIMIT offset,count. So, in this example the query
skips two records and returns one.
Result:
Note:
The values in LIMIT clauses must be constants; they may not be column values.
Data
Result
+----+-----------+
| id | username |
+----+-----------+
| 2 | root |
| 3 | toor |
| 4 | mysql |
| 5 | thanks |
+----+-----------+
4 rows in set (0.00 sec)
Note
If you want to use the negative, you can use NOT. For example:
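A sketch consistent with the result shown (the earlier result presumably came from the complementary BETWEEN 2 AND 5):
SELECT * FROM stack WHERE id NOT BETWEEN 2 AND 5;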
Result
+----+-----------+
| id | username |
+----+-----------+
| 1 | admin |
| 6 | java |
+----+-----------+
2 rows in set (0.00 sec)
Note
If you have an index on a column you use in a BETWEEN search, MySQL can use that index for a range scan.
Result
+------+----------+----------+
| id | username | password |
+------+----------+----------+
| 1 | admin | admin |
+------+----------+----------+
1 row in set (0.00 sec)
The WHERE clause can contain any valid SELECT statement to write more complex queries. This is a 'nested' query.
Query
Nested queries are usually used to return single atomic values from queries for comparisons.
SELECT title FROM books WHERE author_id = (SELECT id FROM authors WHERE last_name = 'Bar' AND
first_name = 'Foo');
SELECT * FROM stack WHERE username IN (SELECT username FROM signups WHERE email IS NULL);
Disclaimer: Consider using joins for performance improvements when comparing a whole result set.
Query
Result
+----------+
| username |
+----------+
| admin1 |
| admin2 |
| admin- |
| adminA   |
+----------+
Sure, this could be done with BETWEEN and inclusion of 23:59:59. But, the pattern has these benefits:
You don't have to pre-calculate the end date (which is often an exact length from the start)
You don't include both endpoints (as BETWEEN does), nor type '23:59:59' to avoid it.
It works for DATE, TIMESTAMP, DATETIME, and even the microsecond-included DATETIME(6).
It takes care of leap days, end of year, etc.
It is index-friendly (so is BETWEEN).
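A minimal sketch of the pattern being praised (t and x stand in for the real table and DATETIME column):
SELECT COUNT(*) FROM t
WHERE x >= '2016-09-01'
  AND x <  '2016-09-01' + INTERVAL 1 DAY;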
Backticks are mainly used to prevent an error called "MySQL reserved word". When making a table in phpMyAdmin
you are sometimes faced with a warning or alert that you are using a "MySQL reserved word".
For example, when you create a table with a column named "group" you get a warning. This is because you can
make the following query:
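For instance (a sketch with a made-up table name):
SELECT id, group FROM clubs; -- produces a syntax error because group is a reserved word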
To make sure you don't get an error in your query you have to use backticks so your query becomes:
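That is (same sketch, now with backticks):
SELECT id, `group` FROM clubs;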
Table
Not only column names can be surrounded by backticks, but also table names. For example when you need to JOIN
multiple tables.
Easier to read
As you can see, using backticks around table and column names also makes the query easier to read.
For example, when you are used to writing queries all in lower case, the backticks make the identifiers stand out.
Please see the MySQL Manual page entitled Keywords and Reserved Words. The ones with an (R) are Reserved
Words. The others are merely Keywords. The Reserved require special caution.
A LEFT JOIN combined with WHERE b.id IS NULL, as below, tests for rows of a for which there is no corresponding row in b.
SELECT ...
FROM a
LEFT JOIN b ON ...
WHERE b.id IS NULL
id username
1 User1
2 User2
3 User3
4 User4
5 User5
In order to constrain the number of rows in the result set of a SELECT query, the LIMIT clause can be used together
with one or two non-negative integers as arguments.
When one argument is used, the result set will only be constrained to the number specified in the following
manner:
Also notice that the ORDER BY clause may be important in order to specify the first rows of the result set that will be
presented (when ordering by another column).
the first argument represents the row from which the result set rows will be presented – this number is
often mentioned as an offset, since it represents the row previous to the initial row of the constrained result
set. This allows the argument to receive 0 as value and thus taking into consideration the first row of the non-
constrained result set.
the second argument specifies the maximum number of rows to be returned in the result set (similarly to
the one argument's example).
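A sketch (the sample table is assumed to be called users):
SELECT * FROM users LIMIT 2, 3;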
id username
3 User3
4 User4
5 User5
Notice that when the offset argument is 0, the result set will be equivalent to a one-argument LIMIT clause. This
means that the following result could have come from either LIMIT 0, 2 or simply LIMIT 2:
id username
1 User1
2 User2
OFFSET keyword: alternative syntax
An alternative syntax for the LIMIT clause with two arguments consists in the usage of the OFFSET keyword after the
first argument in the following manner:
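A sketch (same assumption about the table name):
SELECT * FROM users LIMIT 2 OFFSET 2;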
id username
3 User3
4 User4
Notice that in this alternative syntax the arguments have their positions switched:
the first argument represents the number of rows to be returned in the result set;
the second argument represents the offset.
If the database already exists, Error 1007 is returned. To get around this error, try:
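A sketch of the statement meant here:
CREATE DATABASE IF NOT EXISTS Baseball;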
Similarly,
DROP DATABASE IF EXISTS Baseball; -- Drops a database if it exists, avoids Error 1008
DROP DATABASE xyz; -- If xyz does not exist, ERROR 1008 will occur
Due to the above error possibilities, DDL statements are often used with IF NOT EXISTS (for CREATE) or IF EXISTS (for DROP).
One can create a database with a default CHARACTER SET and collation. For example:
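A sketch (the utf8 character set matches the SHOW CREATE DATABASE output below):
CREATE DATABASE Baseball DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
SHOW CREATE DATABASE Baseball;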
+----------+-------------------------------------------------------------------+
| Database | Create Database |
+----------+-------------------------------------------------------------------+
| Baseball | CREATE DATABASE `Baseball` /*!40100 DEFAULT CHARACTER SET utf8 */ |
+----------+-------------------------------------------------------------------+
SHOW DATABASES;
+---------------------+
| Database |
+---------------------+
| information_schema |
| ajax_stuff |
| Baseball |
+---------------------+
+------+-----------------+
| cset | col |
+------+-----------------+
| utf8 | utf8_general_ci |
+------+-----------------+
The above shows the default CHARACTER SET and Collation for the database.
Create a user:
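A sketch consistent with the description and the output further below (the second password is made up):
CREATE USER 'John123'@'%' IDENTIFIED BY 'OpenSesame';
CREATE USER 'John456'@'%' IDENTIFIED BY 'OpenSesame2';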
The above creates a user John123, able to connect from any hostname due to the % wildcard. The password for the
user is set to 'OpenSesame', which is stored hashed.
Show that the users have been created by examining the special mysql database:
+---------+------+-------------------------------------------+
| user | host | password |
+---------+------+-------------------------------------------+
| John123 | % | *E6531C342ED87 .................... |
| John456 | % | *B04E11FAAAE9A .................... |
+---------+------+-------------------------------------------+
Note that at this point, the users have been created, but without any permissions to use the Baseball database.
Work with permissions for users and databases. Grant rights to user John123 to have full privileges on the Baseball
database, and just SELECT rights for the other user:
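A sketch of the grants being described, verified afterwards with SHOW GRANTS:
GRANT ALL ON Baseball.* TO 'John123'@'%';
GRANT SELECT ON Baseball.* TO 'John456'@'%';
SHOW GRANTS FOR 'John123'@'%';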
+---------------------------------------------------------------------------------------------------+
| Grants for John123@%                                                                               |
+---------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'John123'@'%' IDENTIFIED BY PASSWORD '*E6531C342ED87 ....................   |
| GRANT ALL PRIVILEGES ON `baseball`.* TO 'John123'@'%'                                              |
+---------------------------------------------------------------------------------------------------+

+---------------------------------------------------------------------------------------------------+
| Grants for John456@%                                                                               |
+---------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'John456'@'%' IDENTIFIED BY PASSWORD '*B04E11FAAAE9A ....................   |
| GRANT SELECT ON `baseball`.* TO 'John456'@'%'                                                      |
+---------------------------------------------------------------------------------------------------+
Note that the GRANT USAGE that you will always see simply means that the user may log in; that is all it means.
Under Unix, database names are case sensitive (unlike SQL keywords), so you must always refer to your database
as menagerie, not as Menagerie, MENAGERIE, or some other variant. This is also true for table names. (Under
Windows, this restriction does not apply, although you must refer to databases and tables using the same
lettercase throughout a given query. However, for a variety of reasons, the recommended best practice is always to
use the same lettercase that was used when the database was created.)
Creating a database does not select it for use; you must do that explicitly. To make menagerie the current database,
use this statement:
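That statement is:
USE menagerie;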
Your database needs to be created only once, but you must select it for use each time you begin a mysql session.
You can do this by issuing a USE statement as shown in the example. Alternatively, you can select the database on
the command line when you invoke mysql. Just specify its name after any connection parameters that you might
need to provide. For example:
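For example (one common form; host and user are placeholders):
mysql -h host -u user -p menagerie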
You can reference your table by qualifying with the database name: my_db.some_table.
1. You can set a variable to a specific string, number, or date using SET
2. You can set a variable to the result of a SELECT expression using :=
3. You can set a variable to be the result of a select statement using INTO
(This was particularly helpful when I needed to dynamically choose which Partitions to query from)
#this gets the year month value to use as the partition names
SET @start_yearmonth = (SELECT EXTRACT(YEAR_MONTH FROM @start_date));
SET @end_yearmonth = (SELECT EXTRACT(YEAR_MONTH FROM @end_date));
# Put the query in a variable. You need to do this, because MySQL did not recognize my variable as a
# variable in that position. You need to concat the value of the variable together with the rest of the
# query and then execute it as a stmt.
SET @query =
CONCAT('CREATE TABLE part_of_partitioned_table (PRIMARY KEY(id))
SELECT partitioned_table.*
FROM partitioned_table PARTITION(', @partitions,')
JOIN users u USING(user_id)
WHERE date(partitioned_table.date) BETWEEN ', QUOTE(@start_date), ' AND ', QUOTE(@end_date)); # QUOTE() adds the quotes the date literals need
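The execution of the prepared statement referred to in the comment above is not shown; it would be the standard pattern:
PREPARE stmt FROM @query;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;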
+======+===========+
| team | person |
+======+===========+
| A | John |
+------+-----------+
| B | Smith |
+------+-----------+
| A | Walter |
+------+-----------+
| A | Louis |
+------+-----------+
| C | Elizabeth |
+------+-----------+
| B | Wayne |
+------+-----------+
OR
SET @row_no := 0;
SELECT @row_no := @row_no + 1 AS row_number, team, person
FROM team_person;
+============+======+===========+
| row_number | team | person |
+============+======+===========+
| 1 | A | John |
+------------+------+-----------+
| 2 | B | Smith |
+------------+------+-----------+
| 3 | A | Walter |
+------------+------+-----------+
| 4 | A | Louis |
+------------+------+-----------+
| 5 | C | Elizabeth |
+------------+------+-----------+
| 6 | B | Wayne |
+------------+------+-----------+
+============+======+===========+
| row_number | team | person |
+============+======+===========+
| 1 | A | Walter |
+------------+------+-----------+
| 2 | A | Louis |
+------------+------+-----------+
| 3 | A | John |
+------------+------+-----------+
| 1 | B | Wayne |
+------------+------+-----------+
| 2 | B | Smith |
+------------+------+-----------+
| 1 | C | Elizabeth |
+------------+------+-----------+
/*
This is a
multiple-line comment
*/
Example:
The -- method requires that a space follows the -- before the comment begins, otherwise it will be interpreted as a
command and usually cause an error.
These comments, unlike the others, are saved with the schema and can be retrieved via SHOW CREATE TABLE or
from information_schema.
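For example (a sketch; the table name and comment text are made up):
CREATE TABLE explained (
    id INT COMMENT 'this column comment is stored with the schema'
) COMMENT = 'so is this table comment';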
This will INSERT into table_name the specified values, but if the unique key already exists, it will update the
other_field_1 to have a new value.
Sometimes, when updating on duplicate key it comes in handy to use VALUES() in order to access the original value
that was passed to the INSERT instead of setting the value directly. This way, you can set different values by using
INSERT and UPDATE. See the example above where other_field_1 is set to insert_value on INSERT or to
update_value on UPDATE while other_field_2 is always set to other_value.
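The example referred to is not visible in this extract; a sketch matching that description (unique_key_field stands in for whatever column carries the unique key):
INSERT INTO table_name (unique_key_field, other_field_1, other_field_2)
VALUES (1, 'insert_value', 'other_value')
ON DUPLICATE KEY UPDATE
    other_field_1 = 'update_value',
    other_field_2 = VALUES(other_field_2);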
Crucial for the Insert on Duplicate Key Update (IODKU) to work is the schema containing a unique key that will
signal a duplicate clash. This unique key can be a Primary Key or not. It can be a unique key on a single column, or a
multi-column (composite key).
This is an easy way to add several rows at once with one INSERT statement.
This kind of 'batch' insert is much faster than inserting rows one by one. Typically, inserting 100 rows in a single
batch insert this way is 10 times as fast as inserting them all individually.
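For instance (a sketch reusing the stack table from earlier; the values are made up):
INSERT INTO stack (id, username, password) VALUES
    (3, 'Fred', 'wilma'),
    (4, 'Barney', 'betty'),
    (5, 'Dino', 'hopper');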
When importing large datasets, it may be preferable under certain circumstances to skip rows that would usually
cause the query to fail due to a column restraint e.g. duplicate primary keys. This can be done using INSERT IGNORE.
The important thing to remember is that INSERT IGNORE will also silently skip other errors too, here is what Mysql
official documentations says:
Data conversions that would trigger errors abort the statement if IGNORE is not specified. With IGNORE,
invalid values are adjusted to the closest values and inserted; warnings are produced but the statement
does not abort.
Note: The section below is added for the sake of completeness, but is not considered best practice (this
would fail, for example, if another column was added into the table).
If you specify the value of the corresponding column for all columns in the table, you can ignore the column list in
the INSERT statement as follows:
In this trivial example, table_name is where the data are to be added, field_one and field_two are fields to set
data against, and value_one and value_two are the values to set for field_one and field_two respectively.
It's good practice to list the fields you are inserting data into within your code, because if the table changes and new
columns are added, your insert would break if they are not listed.
CREATE TABLE t (
id SMALLINT UNSIGNED AUTO_INCREMENT NOT NULL,
this ...,
that ...,
PRIMARY KEY(id) );
Your client API probably has an alternative way of getting the LAST_INSERT_ID() without actually performing a
SELECT and handing the value back to the client instead of leaving it in an @variable inside MySQL. Such is usually
preferable.
The "normal" usage of IODKU is to trigger "duplicate key" based on some UNIQUE key, not the AUTO_INCREMENT
PRIMARY KEY. The following demonstrates such. Note that it does not supply the id in the INSERT.
The case of IODKU performing an "update" and LAST_INSERT_ID() retrieving the relevant id:
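A sketch of the statement being described (assuming table t has a UNIQUE key on name, matching the final table shown further below):
INSERT INTO t (name, misc) VALUES ('Sally', 3333)
    ON DUPLICATE KEY UPDATE
        id = LAST_INSERT_ID(id),  -- makes LAST_INSERT_ID() return the existing row's id
        misc = VALUES(misc);
SELECT LAST_INSERT_ID();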
+------------------+
| LAST_INSERT_ID() |
+------------------+
| 2 |
+------------------+
The case where IODKU performs an "insert" and LAST_INSERT_ID() retrieves the new id:
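And for the insert case (same assumptions; 'Dana' does not exist yet):
INSERT INTO t (name, misc) VALUES ('Dana', 789)
    ON DUPLICATE KEY UPDATE
        id = LAST_INSERT_ID(id),
        misc = VALUES(misc);
SELECT LAST_INSERT_ID();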
+------------------+
| LAST_INSERT_ID() |
+------------------+
| 3 |
+------------------+
+----+--------+------+
| id | name | misc |
+----+--------+------+
| 1 | Leslie | 123 |
| 2 | Sally | 3333 | -- IODKU changed this
| 3 | Dana | 789 | -- IODKU added this
+----+--------+------+
You can write INSERT INTO tableA SELECT * FROM tableB, but then tableA and tableB must have matching column counts and corresponding
datatypes.
Columns with AUTO_INCREMENT are treated as in the INSERT with VALUES clause.
This syntax makes it easy to fill (temporary) tables with data from other tables, even more so when the data is to be
filtered on the insert.
INSERT IGNORE INTO Burn (name) VALUES ('second'); -- dup 'IGNOREd', but id=3 is burned
SELECT LAST_INSERT_ID(); -- Still "1" -- can't trust in this situation
SELECT * FROM Burn ORDER BY id;
+----+--------+
| id | name   |
+----+--------+
|  1 | first  |
|  2 | second |
+----+--------+
Think of it (roughly) this way: First the insert looks to see how many rows might be inserted. Then grab that many
values from the auto_increment for that table. Finally, insert the rows, using ids as needed, and burning any left
overs.
The only time the leftovers are recoverable is if the system is shut down and restarted. On restart, effectively MAX(id)
is performed. This may reuse ids that were burned or that were freed up by DELETEs of the highest id(s).
Essentially any flavor of INSERT (including REPLACE, which is DELETE + INSERT) can burn ids. In InnoDB, the global
(not session!) variable innodb_autoinc_lock_mode can be used to control some of what is going on.
When "normalizing" long strings into an AUTO INCREMENT id, burning can easily happen. This could lead to
overflowing the size of the INT you chose.
DELETE p2
FROM pets p2
WHERE p2.ownerId in (
SELECT p1.id
FROM people p1
WHERE p1.name = 'Paul');
1 row deleted
Spot is deleted from Pets
p1 and p2 are aliases for the table names, especially useful for long table names and ease of readability.
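The multi-table form being described here appears again later in this section; for reference:
DELETE p1, p2 FROM people p1 JOIN pets p2 ON p2.ownerId = p1.id WHERE p1.name = 'Paul';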
2 rows deleted
Spot is deleted from Pets
Paul is deleted from People
foreign keys
When the DELETE statement involves tables with a foreign key constraint, the optimizer may process the tables in an
order that does not follow the relationship. Adding, for example, a foreign key to the definition of pets:
ALTER TABLE pets ADD CONSTRAINT `fk_pets_2_people` FOREIGN KEY (ownerId) references people(id) ON
DELETE CASCADE;
the engine may try to delete the entries from people before pets, thus causing the following error:
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails
(`test`.`pets`, CONSTRAINT `pets_ibfk_1` FOREIGN KEY (`ownerId`) REFERENCES `people` (`id`))
The solution in this case is to delete the row from people and rely on InnoDB's ON DELETE capabilities to propagate
the deletion:
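That is (a sketch):
DELETE FROM people WHERE name = 'Paul';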
2 rows deleted
Paul is deleted from People
Spot is deleted on cascade from Pets
SET foreign_key_checks = 0;
DELETE p1, p2 FROM people p1 JOIN pets p2 ON p2.ownerId = p1.id WHERE p1.name = 'Paul';
SET foreign_key_checks = 1;
When you truncate a table, the server doesn't delete the data row by row; it drops the table and recreates it, thereby
deallocating the pages, so there is a chance to recover the truncated data before the pages are overwritten. (The
space cannot immediately be recouped for innodb_file_per_table=OFF.)
The WHERE clause is optional but without it all rows are deleted.
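A sketch of the statement described next:
DELETE FROM table_name WHERE field_one = 'value_one';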
This will delete all rows from the table where the contents of the field_one for that row match 'value_one'
The WHERE clause works in the same way as a select, so things like >, <, <> or LIKE can be used.
Notice: It is necessary to use a conditional clause (WHERE, LIKE) in a DELETE query. If you do not use any conditional
clause, then all data from that table will be deleted.
This will delete everything, all rows from the table. It is the most basic example of the syntax. It also shows that
DELETE statements should really be used with extra care as they may empty a table, if the WHERE clause is omitted.
This works in the same way as the 'Delete with Where clause' example, but it will stop the deletion once the specified
limit of rows has been deleted.
If you are limiting rows for deletion like this, be aware that it will delete the first rows which match the criteria. They
might not be the ones you would expect, as the results can come back unsorted if they are not explicitly ordered.
Update our production data using a join to our imported worktable data.
Aliases q and i are used to abbreviate the table references. This eases development and readability.
qId, the Primary Key, represents the Stackoverflow question id. Four columns are updated for matching rows from
the join.
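The statement described next is not shown in this extract; a sketch:
UPDATE customers SET email = '[email protected]' WHERE id = 1;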
This query updates the content of email in the customers table to the string [email protected] where the
value of id is equal to 1. The old and new contents of the database table are illustrated below on the left and right
respectively:
This query updates the content of lastname for every entry in the customers table. The old and new contents of the
database table are illustrated below on the left and right respectively:
Notice: It is necessary to use conditional clauses (WHERE) in an UPDATE query. If you do not use any conditional
clause, all records of that table's attribute will be updated. In the above example, the new value (Smith) of lastname
would be applied to every row of the customers table.
UPDATE people
SET name =
(CASE id WHEN 1 THEN 'Karl'
WHEN 2 THEN 'Tom'
WHEN 3 THEN 'Mary'
END)
WHERE id IN (1,2,3);
By bulk updating only one query can be sent to the server instead of one query for each row to update. The cases
should contain all possible parameters looked up in the WHERE clause.
If a LIMIT clause is specified in your UPDATE statement, it places a limit on the number of rows that can be updated.
If no LIMIT clause is specified, there is no limit.
The syntax for a MySQL UPDATE with ORDER BY and LIMIT is illustrated by the following example:
UPDATE employees SET isConfirmed=1 ORDER BY joiningDate LIMIT 10
In the above example, 10 rows will be updated according to the order of employees joiningDate.
For example, consider two tables, products and salesOrders. If we decrease the quantity of a particular
product on a sales order that has already been placed, then we also need to add that quantity back to the stock
column of the products table. This can be done in a single SQL UPDATE statement like the one below.
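A sketch of such a statement (the column names and the order filter are assumptions, not from the original):
UPDATE salesOrders so
JOIN products p ON p.product_id = so.product_id
SET so.quantity = so.quantity - 5,
    p.stock     = p.stock + 5
WHERE so.order_id = 100;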
In the above example, a quantity of 5 is subtracted in the salesOrders table and the same amount is added in the
products table, according to the WHERE conditions.
SELECT ... FROM ... WHERE ... GROUP BY ... HAVING ...
ORDER BY ... -- goes here
LIMIT ... OFFSET ...;
( SELECT ... ) UNION ( SELECT ... ) ORDER BY ... -- for ordering the result of the UNION.
ALTER TABLE ... ORDER BY ... -- probably useful only for MyISAM; not for InnoDB
But... Mixing ASC and DESC, as in the last example, cannot use a composite index to benefit. Nor will
INDEX(submit_date DESC, id ASC) help -- "DESC" is recognized syntactically in the INDEX declaration, but ignored.
Custom ordering
Useful if the ids are already sorted and you just need to retrieve the rows.
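For example (a sketch; the id list is made up):
SELECT * FROM some_table
WHERE id IN (118, 17, 113)
ORDER BY FIELD(id, 118, 17, 113);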
Using GROUP BY ... HAVING to filter aggregate records is analogous to using SELECT ... WHERE to filter individual
records.
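A sketch of the kind of query meant here (the table and column names are assumptions):
SELECT department, COUNT(*) AS Man_Power
FROM employees
GROUP BY department
HAVING COUNT(*) >= 10;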
You could also say HAVING Man_Power >= 10 since HAVING understands "aliases".
Name Score
Adam A+
Adam A-
Adam B
Adam C+
Bill D-
John A-
SELECT Name, GROUP_CONCAT(Score ORDER BY Score DESC SEPARATOR ' ') AS Grades
FROM Grade
GROUP BY Name
Results:
+------+------------+
| Name | Grades |
+------+------------+
| Adam | C+ B A- A+ |
| Bill | D- |
| John | A- |
+------+------------+
This would tell you which department contains the employee with the lowest salary, and what that salary is. Finding
the name of the employee with the lowest salary in each department is a different problem, beyond the scope of this
Example. See "groupwise max".
+---------+------------+----------+-------+--------+
| orderid | customerid | customer | total | items |
+---------+------------+----------+-------+--------+
| 1 | 1 | Bob | 1300 | 10 |
| 2 | 3 | Fred | 500 | 2 |
| 3 | 5 | Tess | 2500 | 8 |
| 4 | 1 | Bob | 300 | 6 |
| 5 | 2 | Carly | 800 | 3 |
| 6 | 2 | Carly | 1000 | 12 |
| 7 | 3 | Fred | 100 | 1 |
| 8 | 5 | Tess | 11500 | 50 |
| 9 | 4 | Jenny | 200 | 2 |
| 10 | 1 | Bob | 500 | 15 |
+---------+------------+----------+-------+--------+
COUNT
Return the number of rows that satisfy a specific criterion in the WHERE clause.
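The result below presumably comes from a query like this (the sample table is assumed to be named orders):
SELECT customer, COUNT(*) AS orders FROM orders GROUP BY customer;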
Result:
+----------+--------+
| customer | orders |
+----------+--------+
| Bob | 3 |
| Carly | 2 |
| Fred | 2 |
| Jenny | 1 |
| Tess | 2 |
+----------+--------+
SUM
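Likewise, a sketch (same table-name assumption):
SELECT customer, SUM(total) AS sum_total, SUM(items) AS sum_items FROM orders GROUP BY customer;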
Result:
+----------+-----------+-----------+
| customer | sum_total | sum_items |
+----------+-----------+-----------+
| Bob | 2100 | 31 |
| Carly | 1800 | 15 |
| Fred | 600 | 3 |
| Jenny | 200 | 2 |
| Tess | 14000 | 58 |
+----------+-----------+-----------+
AVG
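Likewise (a sketch):
SELECT customer, AVG(total) AS avg_total FROM orders GROUP BY customer;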
Result:
+----------+-----------+
| customer | avg_total |
+----------+-----------+
| Bob | 700 |
| Carly | 900 |
| Fred | 300 |
| Jenny | 200 |
| Tess | 7000 |
+----------+-----------+
MAX
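Likewise (a sketch):
SELECT customer, MAX(total) AS max_total FROM orders GROUP BY customer;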
Result:
+----------+-----------+
| customer | max_total |
+----------+-----------+
| Bob | 1300 |
| Carly    | 1000      |
| Fred     | 500       |
| Jenny    | 200       |
| Tess     | 11500     |
+----------+-----------+
MIN
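Likewise (a sketch):
SELECT customer, MIN(total) AS min_total FROM orders GROUP BY customer;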
Result:
+----------+-----------+
| customer | min_total |
+----------+-----------+
| Bob | 300 |
| Carly | 800 |
| Fred | 100 |
| Jenny | 200 |
| Tess | 2500 |
+----------+-----------+
will show the rows in a table called item, and show the count of related rows in a table called uses. It will also show
the value of a column called uses.category.
This query works in MySQL (before the ONLY_FULL_GROUP_BY flag appeared). It uses MySQL's nonstandard extension
to GROUP BY.
But the query has a problem: if several rows in the uses table match the ON condition in the JOIN clause, MySQL
returns the category column from just one of those rows. Which row? The writer of the query, and the user of the
application, doesn't get to know that in advance. Formally speaking, it's unpredictable: MySQL can return any value it
wants.
Unpredictable is like random, with one significant difference. One might expect a random choice to change from time
to time. Therefore, if a choice were random, you might detect it during debugging or testing. The unpredictable
result is worse: MySQL returns the same result each time you use the query, until it doesn't. Sometimes it's a new
version of the MySQL server that causes a different result. Sometimes it's a growing table causing the problem.
What can go wrong, will go wrong, and when you don't expect it. That's called Murphy's Law.
The MySQL team has been working to make it harder for developers to make this mistake. Newer versions of
MySQL in the 5.7 sequence have a sql_mode flag called ONLY_FULL_GROUP_BY. When that flag is set, the MySQL
server returns the 1055 error and refuses to run this kind of query.
To do this, we need a subquery that uses GROUP BY correctly to return the number_of_uses value for each item_id.
This allows the GROUP BY clause to be simple and correct, and also allows us to use the * specifier.
Note: nevertheless, wise developers avoid using the * specifier in any case. It's usually better to list the columns you
want in a query.
shows the rows in a table called item, the count of related rows, and one of the values in the related table called
uses.
You can think of this ANY_VALUE() function as a strange a kind of aggregate function. Instead of returning a count,
sum, or maximum, it instructs the MySQL server to choose, arbitrarily, one value from the group in question. It's a
way of working around Error 1055.
It really should be called SURPRISE_ME(). It returns the value of some row in the GROUP BY group. Which row it
returns is indeterminate. That means it's entirely up to the MySQL server. Formally, it returns an unpredictable
value.
The server doesn't choose a random value, it's worse than that. It returns the same value every time you run the
query, until it doesn't. It can change, or not, when a table grows or shrinks, or when the server has more or less
RAM, or when the server version changes, or when Mars is in retrograde (whatever that means), or for no reason at
all.
will show the rows in a table called item, and show the count of related rows in a table called uses. This works well,
but unfortunately it's not standard SQL-92.
Why not? Because the SELECT clause (and the ORDER BY clause) in GROUP BY queries must contain only columns that are
named in the GROUP BY clause, or aggregate (set) functions of other columns.
This example's SELECT clause mentions item.name, a column that does not meet either of those criteria. MySQL 5.6
and earlier will reject this query if the SQL mode contains ONLY_FULL_GROUP_BY.
This example query can be made to comply with the SQL-92 standard by changing the GROUP BY clause, like this.
The later SQL-99 standard allows a SELECT statement to omit unaggregated columns from the group key if the
DBMS can prove a functional dependence between them and the group key columns. Because item.name is
functionally dependent on item.item_id, the initial example is valid SQL-99. MySQL gained a functional
dependence prover in version 5.7. The original example works under ONLY_FULL_GROUP_BY.
This will evaluate the subquery into a temp table, then JOIN that to tbl.
Prior to 5.6, there could not be an index on the temp table. So, this was potentially very inefficient:
SELECT ...
FROM ( SELECT y, ... FROM ... ) AS a
JOIN ( SELECT x, ... FROM ... ) AS b ON b.x = a.y
WHERE ...
With 5.6, the optimizer figures out the best index and creates it on the fly. (This has some overhead, so it is still not
'perfect'.)
SELECT
@n := @n + 1,
...
FROM ( SELECT @n := 0 ) AS initialize
JOIN the_real_table
ORDER BY ...
(Note: this is technically a CROSS JOIN (Cartesian product), as indicated by the lack of ON. However it is efficient
because the subquery returns only one row that has to be matched to the n rows in the_real_table.)
-- ----------------------------
-- Table structure for `owners`
-- ----------------------------
DROP TABLE IF EXISTS `owners`;
CREATE TABLE `owners` (
`owner_id` int(11) NOT NULL AUTO_INCREMENT,
`owner` varchar(30) DEFAULT NULL,
PRIMARY KEY (`owner_id`)
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=latin1;
-- ----------------------------
-- Records of owners
-- ----------------------------
INSERT INTO `owners` VALUES ('1', 'Ben');
INSERT INTO `owners` VALUES ('2', 'Jim');
INSERT INTO `owners` VALUES ('3', 'Harry');
INSERT INTO `owners` VALUES ('6', 'John');
INSERT INTO `owners` VALUES ('9', 'Ellie');
-- ----------------------------
-- Table structure for `tools`
-- ----------------------------
DROP TABLE IF EXISTS `tools`;
CREATE TABLE `tools` (
`tool_id` int(11) NOT NULL AUTO_INCREMENT,
`tool` varchar(30) DEFAULT NULL,
`owner_id` int(11) DEFAULT NULL,
PRIMARY KEY (`tool_id`)
) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
-- ----------------------------
-- Records of tools
-- ----------------------------
INSERT INTO `tools` VALUES ('1', 'Hammer', '9');
INSERT INTO `tools` VALUES ('2', 'Pliers', '1');
INSERT INTO `tools` VALUES ('3', 'Knife', '1');
INSERT INTO `tools` VALUES ('4', 'Chisel', '2');
INSERT INTO `tools` VALUES ('5', 'Hacksaw', '1');
INSERT INTO `tools` VALUES ('6', 'Level', null);
INSERT INTO `tools` VALUES ('7', 'Wrench', null);
INSERT INTO `tools` VALUES ('8', 'Tape Measure', '9');
INSERT INTO `tools` VALUES ('9', 'Screwdriver', null);
INSERT INTO `tools` VALUES ('10', 'Clamp', null);
We want to get a list, in which we see who owns which tools, and which tools might not have an owner.
The queries
To accomplish this, we can combine two queries by using UNION. In this first query we are joining the tools on the
owners by using a LEFT JOIN. This will add all of our owners to our resultset, doesn't matter if they actually own
tools.
In the second query we are using a RIGHT JOIN to join the tools onto the owners. This way we manage to get all the
tools in our resultset, if they are owned by no one their owner column will simply contain NULL. By adding a WHERE-
clause which is filtering by owners.owner_id IS NULL we are defining the result as those datasets, which have not
already been returned by the first query, as we are only looking for the data in the right joined table.
Since we are using UNION ALL, the result set of the second query will be attached to the first query's result set.
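The two queries being described, combined (a sketch reconstructed from the description above):
SELECT owners.owner, tools.tool
FROM owners
LEFT JOIN tools ON tools.owner_id = owners.owner_id

UNION ALL

SELECT owners.owner, tools.tool
FROM owners
RIGHT JOIN tools ON tools.owner_id = owners.owner_id
WHERE owners.owner_id IS NULL;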
+-------+--------------+
| owner | tool |
+-------+--------------+
| Ben | Pliers |
| Ben | Knife |
| Ben | Hacksaw |
| Jim | Chisel |
| Harry | NULL |
| John | NULL |
| Ellie | Hammer |
| Ellie | Tape Measure |
| NULL | Level |
| NULL | Wrench |
| NULL | Screwdriver |
| NULL | Clamp |
+-------+--------------+
12 rows in set (0.00 sec)
SELECT c.CustomerName,
( SELECT COUNT(*) FROM Orders WHERE CustomerID = c.CustomerID ) AS 'Order Count'
FROM Customers AS c
ORDER BY c.CustomerName;
SELECT c.CustomerName
FROM Customers AS c
WHERE EXISTS ( SELECT * FROM Orders WHERE CustomerID = c.CustomerID )
ORDER BY c.CustomerName;
Since we’re using InnoDB tables and know that user.course and course.id are related, we can specify a foreign key
relationship:
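A sketch of such a declaration:
ALTER TABLE user ADD FOREIGN KEY (course) REFERENCES course (id);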
After creating the tables, you could do a SELECT query to get the ids of all three tables that are the same:
SELECT
t1.id AS table1Id,
t2.id AS table2Id,
t3.id AS table3Id
FROM Table1 t1
LEFT JOIN Table2 t2 ON t2.id = t1.id
LEFT JOIN Table3 t3 ON t3.id = t1.id
For example, if you wanted a list of all contact info from two separate tables, authors and editors, for instance,
you could use the UNION keyword like so:
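A sketch (assuming both tables expose name and email columns):
SELECT name, email FROM authors
UNION
SELECT name, email FROM editors;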
Using union by itself will strip out duplicates. If you needed to keep duplicates in your query, you could use the ALL
keyword like so: UNION ALL.
When combining two record sets with different columns, emulate the missing ones with default values.
( SELECT ... )
UNION
( SELECT ... )
ORDER BY
Without the parentheses, the final ORDER BY would belong to the last SELECT.
Since you cannot predict which SELECT(s) the "10" will come from, you need to get 10 from each, then further
whittle down the list, repeating both the ORDER BY and LIMIT.
That is, collect 4 pages' worth of rows in each SELECT, then do the OFFSET in the UNION.
If the numbers in your arithmetic are all integers, MySQL uses the BIGINT (signed 64-bit) integer data type to do its
work. For example:
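For example (the specific numbers are made up, but 1024 multiplied six times is 2^60, still within the signed 64-bit range):
select 1024 * 1024 * 1024 * 1024 * 1024 * 1024 -> 1152921504606846976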
and
select 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 -> BIGINT out of range error
DOUBLE
If any numbers in your arithmetic are fractional, MySQL uses 64-bit IEEE 754 floating point arithmetic. You must be
careful when using floating point arithmetic, because many floating point numbers are, inherently, approximations
rather than exact values.
The following returns the value of PI() formatted to 6 decimal places. The actual value is good to DOUBLE precision:
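SELECT PI() -> 3.141593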
If you use DECIMAL values in trigonometric computations, they are implicitly converted to floating point, and then
back to decimal.
Cosine
Tangent
Returns the tangent of a number X expressed in radians. Notice the result is very close to zero, but not exactly zero.
This is an example of machine ε.
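For example (the exact digits vary, but the result is a tiny non-zero value):
SELECT TAN(PI()) -> approximately -1.22e-16, not exactly 0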
ATAN2(X, Y) returns the arc tangent of the two variables X and Y. It is similar to calculating the arc tangent of Y / X.
But it is numerically more robust: it functions correctly when X is near zero, and the signs of both arguments are
used to determine the quadrant of the result.
Best practice suggests writing formulas to use ATAN2() rather than ATAN() wherever possible.
Cotangent
Conversion
SELECT RADIANS(90) -> 1.5707963267948966
SELECT SIN(RADIANS(90)) -> 1
SELECT DEGREES(1), DEGREES(PI()) -> 57.29577951308232, 180
For exact numeric values (e.g. DECIMAL): If the first decimal place of a number is 5 or higher, this function will round
a number to the next integer away from zero. If that decimal place is 4 or lower, this function will round to the next
integer value closest to zero.
For approximate numeric values (e.g. DOUBLE): The result of the ROUND() function depends on the C library; on many
systems, this means that ROUND() uses the round to the nearest even rule:
Round up a number
To generate a pseudorandom floating point number between 0 and 1, use the RAND() function
i RAND()
1 0.6191438870682
2 0.93845168309142
3 0.83482678498591
Random Number in a range
To generate a random number in the range a <= n <= b, you can use the following formula
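The formula itself is missing from this extract; the usual pattern for a pseudorandom integer n with a <= n <= b is:
FLOOR(a + RAND() * (b - a + 1))
For example, an integer between 7 and 12:
SELECT FLOOR(7 + RAND() * 6);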
The pseudorandom number generator in MySQL is not cryptographically secure. That is, if you use MySQL to
generate random numbers to be used as secrets, a determined adversary who knows you used MySQL will be able
to guess your secrets more easily than you might believe.
Syntax: LENGTH(str)
LENGTH('foobar') -- 6
LENGTH('fööbar') -- 8 -- contrast with CHAR_LENGTH(...) = 6
Syntax: CHAR_LENGTH(str)
CHAR_LENGTH('foobar') -- 6
CHAR_LENGTH('fööbar') -- 6 -- contrast with LENGTH(...) = 8
Syntax: UPPER(str)
UPPER('fOoBar') -- 'FOOBAR'
UCASE('fOoBar') -- 'FOOBAR'
Syntax: LOWER(str)
LOWER('fOoBar') -- 'foobar'
LCASE('fOoBar') -- 'foobar'
SELECT FIND_IN_SET('d','a,b,c');
Return value:
0
Show the mysql questions stored that were asked 3 to 10 hours ago (180 to 600 minutes ago):
SELECT qId,askDate,minuteDiff
FROM
( SELECT qId,askDate,
TIMESTAMPDIFF(MINUTE,askDate,now()) as minuteDiff
FROM questions_mysql
) xDerived
WHERE minuteDiff BETWEEN 180 AND 600
ORDER BY qId DESC
LIMIT 50;
+----------+---------------------+------------+
| qId | askDate | minuteDiff |
+----------+---------------------+------------+
| 38546828 | 2016-07-23 22:06:50 | 182 |
| 38546733 | 2016-07-23 21:53:26 | 195 |
| 38546707 | 2016-07-23 21:48:46 | 200 |
| 38546687 | 2016-07-23 21:45:26 | 203 |
| ... | | |
+----------+---------------------+------------+
Beware Do not try to use expressions like CURDATE() + 1 for date arithmetic in MySQL. They don't return what you
expect, especially if you're accustomed to the Oracle database product. Use CURDATE() + INTERVAL 1 DAY instead.
This function returns the current date and time as a value in 'YYYY-MM-DD HH:MM:SS' or YYYYMMDDHHMMSS format,
depending on whether the function is used in a string or numeric context. It returns the date and time in the
current time zone.
SELECT NOW();
SELECT CURDATE();
This function returns the current date, without any time, as a value in 'YYYY-MM-DD' or YYYYMMDD format, depending
on whether the function is used in a string or numeric context. It returns the date in the current time zone.
It's inefficient because it applies a function -- DATE() -- to the values of a column. That means MySQL must examine
each value of x, and an index cannot be used.
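The efficient form being described is along these lines (a sketch; t and x stand in for the real table and column):
SELECT * FROM t
WHERE x >= '2016-09-01'
  AND x <  '2016-09-01' + INTERVAL 1 DAY;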
This selects a range of values of x lying anywhere on the day in question, up until but not including (hence <)
midnight on the next day.
If the table has an index on the x column, then the database server can perform a range scan on the index. That
means it can quickly find the first relevant value of x, and then scan the index sequentially until it finds the last
relevant value. An index range scan is much more efficient than the full table scan required by DATE(x) =
'2016-09-01'.
This will update the field mydatefield with current server date and time in server's configured timezone, e.g.
'2016-07-21 12:00:00'
SELECT NOW();
SET time_zone='Asia/Kolkata';
SELECT NOW();
SET time_zone='UTC';
SELECT NOW();
Why is this? TIMESTAMP values are based on the venerable UNIX time_t data type. Those UNIX timestamps are
stored as a number of seconds since 1970-01-01 00:00:00 UTC.
Notice TIMESTAMP values are stored in universal time. DATE and DATETIME values are stored in whatever local time
was in effect when they were stored.
SELECT @@time_zone
Unfortunately, that usually yields the value SYSTEM, meaning the MySQL time is governed by the server OS's time
zone setting.
This sequence of queries (yes, it's a hack) gives you back the offset in minutes between the server's time zone and UTC:
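The queries themselves are not visible in this extract; a sketch that matches the description that follows:
CREATE TEMPORARY TABLE tz_check (dt DATETIME, ts TIMESTAMP);
SET time_zone = 'UTC';
INSERT INTO tz_check VALUES (NOW(), NOW());
SET time_zone = 'SYSTEM';
SELECT TIMESTAMPDIFF(MINUTE, dt, ts) AS offset_minutes FROM tz_check;
DROP TEMPORARY TABLE tz_check;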
How does this work? The two columns in the temporary table with different data types is the clue. DATETIME data
types are always stored in local time in tables, and TIMESTAMPs in UTC. So the INSERT statement, performed when
the time_zone is set to UTC, stores two identical date / time values.
Then, the SELECT statement is done when the time_zone is set to server local time. TIMESTAMPs are always
translated from their stored UTC form to local time in SELECT statements. DATETIMEs are not. So the
TIMESTAMPDIFF(MINUTE...) operation computes the difference between local and universal time.
SELECT mysql.time_zone_name.name FROM mysql.time_zone_name;
Ordinarily, this shows the ZoneInfo list of time zones maintained by Paul Eggert at the Internet Assigned Numbers
Authority. Worldwide there are approximately 600 time zones.
Unix-like operating systems (Linux distributions, BSD distributions, and modern Mac OS distributions, for example)
receive routine updates. Installing these updates on an operating system lets the MySQL instances running there
track the changes in time zone and daylight / standard time changeovers.
If you get a much shorter list of time zone names, your server is either incompletely configured or running on
Windows. Here are instructions for your server administrator to install and maintain the ZoneInfo list.
+-------------+-------------+-------------+--------------+----------+
| EMPLOYEE_ID | FIRST_NAME | LAST_NAME | PHONE_NUMBER | SALARY |
+-------------+-------------+-------------+--------------+----------+
| 100 | Steven | King | 515.123.4567 | 24000.00 |
| 101 | Neena | Kochhar | 515.123.4568 | 17000.00 |
| 102 | Lex | De Haan | 515.123.4569 | 17000.00 |
| 103 | Alexander | Hunold | 590.423.4567 | 9000.00 |
| 104 | Bruce | Ernst | 590.423.4568 | 6000.00 |
| 105 | David | Austin | 590.423.4569 | 4800.00 |
| 106 | Valli | Pataballa | 590.423.4560 | 4800.00 |
| 107 | Diana | Lorentz | 590.423.5567 | 4200.00 |
| 108 | Nancy | Greenberg | 515.124.4569 | 12000.00 |
| 109 | Daniel | Faviet | 515.124.4169 | 9000.00 |
| 110 | John | Chen | 515.124.4269 | 8200.00 |
+-------------+-------------+-------------+--------------+----------+
Pattern ^
Query
Pattern $
Query
NOT REGEXP
Query
Regex Contain
Select all employees whose LAST_NAME contains in and whose FIRST_NAME contains a.
SELECT * FROM employees WHERE FIRST_NAME REGEXP 'a' AND LAST_NAME REGEXP 'in'
-- No ^ or $, pattern can be anywhere -------------------------------------^
Query
Pattern or |
Select all employees whose FIRST_NAME starts with A or B or C and ends with r, e, or i.
Query
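A sketch consistent with that description:
SELECT * FROM employees WHERE FIRST_NAME REGEXP '^[ABC].*[rei]$';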
FIRST_NAME REGEXP '^N' is 1 or 0 depending on the fact that FIRST_NAME matches ^N.
To visualize it better:
SELECT
FIRST_NAME,
IF(FIRST_NAME REGEXP '^N', 'matches ^N', 'does not match ^N') as matching
FROM employees
SELECT
IF(FIRST_NAME REGEXP '^N', 'matches ^N', 'does not match ^N') as matching,
COUNT(*)
FROM employees
GROUP BY matching
The CREATE VIEW statement requires the CREATE VIEW privilege for the view, and some privilege for each column
selected by the SELECT statement. For columns used elsewhere in the SELECT statement, you must have the SELECT
privilege. If the OR REPLACE clause is present, you must also have the DROP privilege for the view. CREATE VIEW
might also require the SUPER privilege, depending on the DEFINER value, as described later in this section.
A view belongs to a database. By default, a new view is created in the default database. To create the view explicitly
in a given database, use a fully qualified name
For Example:
db_name.view_name
Note - Within a database, base tables and views share the same namespace, so a base table and a view cannot have the
same name.
A VIEW can:
Another Example
The following example defines a view that selects two columns from another table as well as an expression
calculated from those columns:
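A sketch following the MySQL manual's classic example (the table name t is assumed):
CREATE VIEW v AS SELECT qty, price, qty * price AS value FROM t;
SELECT * FROM v;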
+------+-------+-------+
| qty | price | value |
+------+-------+-------+
| 3 | 50 | 150 |
+------+-------+-------+
Restrictions
In MySQL, views are not materialized. If you now perform the simple query SELECT * FROM myview, MySQL will
actually perform the LEFT JOIN behind the scenes.
Things like GROUP BY, UNION, HAVING, DISTINCT, and some subqueries prevent the view from being updatable.
Details in reference manual.
A primary key is a NOT NULL single or a multi-column identifier which uniquely identifies a row of a table. An index
is created, and if not explicitly declared as NOT NULL, MySQL will declare them so silently and implicitly.
A table can have only one PRIMARY KEY, and each table is recommended to have one. InnoDB will automatically
create one in its absence, (as seen in MySQL documentation) though this is less desirable.
Often, an AUTO_INCREMENT INT also known as "surrogate key", is used for thin index optimization and relations with
other tables. This value will (normally) increase by 1 whenever a new record is added, starting from a default value
of 1.
However, despite its name, it is not its purpose to guarantee that values are incremental, merely that they are
sequential and unique.
An auto-increment INT value will not reset to its default start value if all rows in the table are deleted, unless the
table is truncated using TRUNCATE TABLE statement.
If the primary key consists of a single column, the PRIMARY KEY clause can be placed inline with the column
definition:
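For example (a sketch with made-up column names):
CREATE TABLE gadgets (
    gadget_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(30) NOT NULL
);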
It is also possible to define a primary key comprising more than one column. This might be done e.g. on the child
table of a foreign-key relationship. A multi-column primary key is defined by listing the participating columns in a
separate PRIMARY KEY clause. Inline syntax is not permitted here, as only one column may be declared PRIMARY KEY
inline. For example:
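For example (again a sketch; note that the key lists order_id first even though it is not the first column defined):
CREATE TABLE order_line (
    line_no SMALLINT UNSIGNED NOT NULL,
    order_id INT UNSIGNED NOT NULL,
    qty INT NOT NULL,
    PRIMARY KEY (order_id, line_no)
);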
Note that the columns of the primary key should be specified in logical sort order, which may be different from the
order in which the columns were defined, as in the example above.
Larger indexes require more disk space, memory, and I/O. Therefore keys should be as small as possible (especially
regarding composed keys). In InnoDB, every 'secondary index' includes a copy of the columns of the PRIMARY KEY.
1. Field name: A valid field name. Make sure to enclose the names in `-chars. This ensures that you can use e.g.
space chars in the field name.
2. Data type [Length]: If the field is CHAR or VARCHAR, it is mandatory to specify a field length.
3. Attributes NULL | NOT NULL: If NOT NULL is specified, then any attempt to store a NULL value in that field will
fail.
4. See more on data types and their attributes here.
Engine=... is an optional parameter used to specify the table's storage engine. If no storage engine is specified, the
table will be created using the server's default table storage engine (usually InnoDB or MyISAM).
Setting defaults
Additionally, where it makes sense you can set a default value for each field by using DEFAULT:
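For instance (a sketch matching the Street/Country description below):
CREATE TABLE addresses (
    Street VARCHAR(80),
    City VARCHAR(80),
    Country VARCHAR(80) DEFAULT 'United States'
);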
If during inserts no Street is specified, that field will be NULL when retrieved. When no Country is specified upon
insert, it will default to "United States".
You can set default values for all column types, except for BLOB, TEXT, GEOMETRY, and JSON fields.
Foreign key: A Foreign Key (FK) is either a single column, or multi-column composite of columns, in a referencing
table. This FK is confirmed to exist in the referenced table. It is highly recommended that the referenced table key
confirming the FK be a Primary Key, but that is not enforced. It is used as a fast lookup into the referenced table, where it
does not need to be unique, and in fact can be a left-most index there.
Foreign key relationships involve a parent table that holds the central data values, and a child table with identical
values pointing back to its parent. The FOREIGN KEY clause is specified in the child table. The parent and child
tables must use the same storage engine. They must not be TEMPORARY tables.
Corresponding columns in the foreign key and the referenced key must have similar data types. The size and sign
of integer types must be the same. The length of string types need not be the same. For nonbinary (character)
string columns, the character set and collation must be the same.
Note: foreign-key constraints are supported under the InnoDB storage engine (not MyISAM or MEMORY). DB set-
ups using other engines will accept this CREATE TABLE statement but will not respect foreign-key constraints.
(Newer MySQL versions default to InnoDB, but it is good practice to be explicit.)
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id | int(11) | YES | | NULL | |
| name | varchar(30) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
To see DESCRIBE performed on all tables in a database at once, see this Example.
The new table will have exactly the same structure as the original table, including indexes and column attributes.
As well as manually creating a table, it is also possible to create a table by selecting data from another table:
You can use any of the normal features of a SELECT statement to modify the data as you go:
Primary keys and indexes will not be preserved when creating tables from SELECT. You must redeclare them:
-- create a table from another table in the same database with some attributes
CREATE TABLE stack3 AS SELECT username, password FROM stack;
-- create a table from another table from another database with all attributes
CREATE TABLE stack2 AS SELECT * FROM second_db.stack;
-- create a table from another table from another database with some attributes
CREATE TABLE stack3 AS SELECT username, password FROM second_db.stack;
N.B.
To create a table that is the same as another table that exists in another database, you need to specify the name of the
database like this:
FROM NAME_DATABASE.name_table
If the table is already InnoDB, this will rebuild the table and its indexes and have an effect similar to OPTIMIZE
TABLE. You may gain some disk space improvement.
If the value of innodb_file_per_table is currently different than the value in effect when t1 was built, this will
convert to (or from) file_per_table.
USE stackoverflow;
ALTER TABLE stack ADD COLUMN submit date NOT NULL; -- add new column
ALTER TABLE stack DROP COLUMN submit; -- drop column
ALTER TABLE stack MODIFY submit DATETIME NOT NULL; -- modify type column
ALTER TABLE stack CHANGE submit submit_date DATETIME NOT NULL; -- change type and name of column
ALTER TABLE stack ADD COLUMN mod_id INT NOT NULL AFTER id_user; -- add new column after an existing column
For example, suppose you got a lot of unwanted (advertisement) rows posted in your table, you deleted them, and you
want to fix the gap in the auto-increment values. Assume the MAX value of the AUTO_INCREMENT column is now 100.
You can use the following to fix the auto-increment value.
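The statement itself is not shown above; a sketch of it (the table name is a placeholder) would be:
ALTER TABLE your_table AUTO_INCREMENT = 101;   -- one more than the current MAX value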
Steps:
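The statement being referred to in step 1 is presumably of this form (the original line is not shown):
RENAME TABLE `<old name>` TO `<new name>`;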
1. Replace <old name> and <new name> in the line above with the relevant values. Note: If the table is being
moved to a different database, the dbname.tablename syntax can be used for <old name> and/or <new name>.
2. Execute it on the relevant database in the MySQL command line or a client such as MySQL Workbench. Note:
The user must have ALTER and DROP privileges on the old table and CREATE and INSERT on the new one.
An attempt to modify the type of this column without first dropping the primary key would result in an error.
CREATE TABLE users (
    firstname varchar(20),
    lastname varchar(20),
    age char(2)
);
To change the type of age column from char to int, we use the query below:
ALTER TABLE users CHANGE age age tinyint UNSIGNED NOT NULL;
Steps:
Alternative Steps:
Rename (move) each table from one db to the other. Do this for each table:
Warning: Do not attempt to move a table or database by simply moving files around on the filesystem. This
worked fine in the old days of just MyISAM, but in the new days of InnoDB and tablespaces, it won't work, especially
when the "Data Dictionary" is moved from the filesystem into system InnoDB tables, probably in the next major
release. Moving (as opposed to just DROPping) a PARTITION of an InnoDB table requires using "transportable
tablespaces". In the near future, there won't even be a file to reach for.
Steps:
ALTER TABLE `<table name>` CHANGE `<old name>` `<new name>` <column definition>;
Creating Table:
Creating a table named tbl and then deleting the created table
Dropping Table:
PLEASE NOTE
Dropping a table will completely delete the table from the database along with all its information, and it cannot
be recovered.
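For reference, the two statements described above might look like the following sketch (the column list is illustrative; the original statements are not shown):
CREATE TABLE tbl (
    id INT NOT NULL PRIMARY KEY,   -- hypothetical columns
    name VARCHAR(30)
);

DROP TABLE tbl;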
If two transactions try to modify the same row and both use row-level locking, one of the transactions waits for
the other to complete.
Row-level locking can also be obtained by using the SELECT ... FOR UPDATE statement for each row expected to be
modified.
Connection 1
START TRANSACTION;
SELECT ledgerAmount FROM accDetails WHERE id = 1 FOR UPDATE;
In connection 1, row level lock obtained by SELECT ... FOR UPDATE statement.
Connection 2
When someone tries to update the same row in connection 2, it will wait for connection 1 to finish its transaction, or an
error message will be displayed, depending on the innodb_lock_wait_timeout setting, which defaults to 50 seconds.
Error Code: 1205. Lock wait timeout exceeded; try restarting transaction
To view details about this lock, run SHOW ENGINE INNODB STATUS
Connection 2
1 row(s) affected
But updating some other row in connection 2 will be executed without any error.
Connection 1
Connection 2
1 row(s) affected
The update is executed without any error in Connection 2 after Connection 1 released row lock by finishing the
transaction.
MySQL enables client sessions to acquire table locks explicitly for the purpose of cooperating with other sessions
for access to tables, or to prevent other sessions from modifying tables during periods when a session requires
exclusive access to them. A session can acquire or release locks only for itself. One session cannot acquire locks for
another session or release locks held by another session.
Locks may be used to emulate transactions or to get more speed when updating tables. This is explained in more
detail later in this section.
UNLOCK TABLES;
EXAMPLE:
In the first example, no external connection can write any data to the products table until the table is unlocked.
In the second example, no external connection can read any data from the products table until the table is unlocked.
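Sketches of the two locking statements described above (the products table name comes from the text; the original statements are not shown):
LOCK TABLES products READ;    -- other sessions may read products but cannot write to it
-- ... work with the table ...
UNLOCK TABLES;

LOCK TABLES products WRITE;   -- other sessions may neither read from nor write to products
-- ... work with the table ...
UNLOCK TABLES;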
Returns message:
Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your
MySQL server version for the right syntax to use near 'from Person' at line 2.
Getting a "1064 error" message from MySQL means the query cannot be parsed without syntax errors. In other
words it can't make sense of the query.
The quotation in the error message begins with the first character of the query that MySQL can't figure out how to
parse. In this example MySQL can't make sense, in context, of from Person. In this case, there's an extra comma
immediately before from Person. The comma tells MySQL to expect another column description in the SELECT
clause.
A syntax error always says ... near '...'. The thing at the beginning of the quotes is very near where the error is.
To locate an error, look at the first token in the quotes and at the last token before the quotes.
Sometimes you will get ... near ''; that is, nothing in the quotes. That means the first character MySQL can't
figure out is right at the end or the beginning of the statement. This suggests the query contains unbalanced quotes
(' or ") or unbalanced parentheses, or that you did not correctly terminate the previous statement.
In the case of a Stored Routine, you may have forgotten to properly use DELIMITER.
So, when you get Error 1064, look at the text of the query, and find the point mentioned in the error message.
Visually inspect the text of the query right around that point.
If you ask somebody to help you troubleshoot Error 1064, it's best to provide both the text of the whole query and
the text of the error message.
SET SQL_SAFE_UPDATES = 0;
SET SQL_SAFE_UPDATES = 1;
Note1: a KEY like this will be created automatically if needed due to the FK definition in the line that follows it. The
developer can skip it, and the KEY (a.k.a. index) will be added if necessary. An example of it being skipped by the
developer is shown below in someOther.
In this case it fails due to the lack of an index in the referenced table getTogethers to handle the speedy lookup of
an eventDT. To be solved in next statement.
Table getTogethers has been modified, and now the creation of someOther will succeed.
MySQL requires indexes on foreign keys and referenced keys so that foreign key checks can be fast and
not require a table scan. In the referencing table, there must be an index where the foreign key columns
are listed as the first columns in the same order. Such an index is created on the referencing table
automatically if it does not exist.
Corresponding columns in the foreign key and the referenced key must have similar data types. The size
and sign of integer types must be the same. The length of string types need not be the same. For
nonbinary (character) string columns, the character set and collation must be the same.
Note the last point above about first (left-most) columns, and the lack of a Primary Key requirement (though a
Primary Key is highly advised).
Upon successful creation of a referencing (child) table, any keys that were automatically created for you are visible
with a command such as the following:
Other common cases of experiencing this error, already mentioned in the documentation quoted above but worth
highlighting, include:
Seemingly trivial differences in INT signedness, such as a signed INT referencing an INT UNSIGNED.
Developers having trouble understanding multi-column (composite) KEYs and the first (left-most) ordering
requirement.
Section 29.4: 1067, 1292, 1366, 1411 - Bad Value for number,
date, default, etc
1067 This is probably related to TIMESTAMP defaults, which have changed over time. See TIMESTAMP defaults in the
Dates & Times page. (which does not exist yet)
1292/1366 DOUBLE/Integer Check for letters or other syntax errors. Check that the columns align; perhaps you
think you are putting into a VARCHAR but it is aligned with a numeric column.
1292 DATETIME Check for too far in past or future. Check for between 2am and 3am on a morning when Daylight
savings changed. Check for bad syntax, such as +00 timezone stuff.
1292 VARIABLE Check the allowed values for the VARIABLE you are trying to SET.
1292 LOAD DATA Look at the line that is 'bad'. Check the escape symbols, etc. Look at the datatypes.
The cause: The Master sends replication items to the Slave before flushing to its binlog (when sync_binlog=OFF). If
the Master crashes before the flush, the Slave has already logically moved past the end of file on the binlog. When
the Master starts up again, it starts a new binlog, so CHANGEing to the beginning of that binlog is the best available
solution.
A longer term solution is sync_binlog=ON, if you can afford the extra I/O that it causes.
A MySQL bug, virus attack, server crash, improper shutdown, or damaged table can be the reason behind this corruption.
When a table gets corrupted, it becomes inaccessible and you cannot access its data anymore. The best way to regain
access is to restore the data from an up-to-date backup. However, if you do not have an up-to-date or otherwise valid
backup, you can go for MySQL repair.
If the table engine type is MyISAM, apply CHECK TABLE, then REPAIR TABLE to it.
Then think seriously about converting to InnoDB, so this error won't happen again.
Syntax
CHECK TABLE <table name>    -- to check the extent of the corruption
REPAIR TABLE <table name>   -- to repair the table
PARTITIONed table(s) with a large number of partitions and innodb_file_per_table = ON. Recommend not
having more than 50 partitions in a given table (for various reasons). (When "Native Partitions" become
available, this advice may change.)
The obvious workaround is to increase the OS limit: to allow more files, change ulimit or
/etc/security/limits.conf or sysctl.conf (kern.maxfiles & kern.maxfilesperproc) or something else (OS
dependent). Then increase open_files_limit and table_open_cache.
As of 5.6.8, open_files_limit is auto-sized based on max_connections, but it is OK to change it from the default.
1. Duplicate Value - Error Code: 1062. Duplicate entry ‘12’ for key ‘PRIMARY’
The primary key column is unique and will not accept duplicate entries. So trying to insert a new row with a key
that is already present in your table will produce this error.
To solve this, set the primary key column as AUTO_INCREMENT. Then, when you are trying to insert a new
row, ignore the primary key column or insert NULL as its value.
2. Unique data field - Error Code: 1062. Duplicate entry ‘A’ for key ‘code’
You may have assigned a column as unique; trying to insert a new row with an already existing value for that
column will produce this error.
SET @z = 30;
call sp_nested_loop(10, 20, @x, @y, @z);
SELECT @x, @y, @z;
Result:
+------+------+------+
| @x | @y | @z |
+------+------+------+
| 10 | 200 | 240 |
+------+------+------+
An IN parameter passes a value into a procedure. The procedure might modify the value, but the modification is
not visible to the caller when the procedure returns.
An OUT parameter passes a value from the procedure back to the caller. Its initial value is NULL within the
procedure, and its value is visible to the caller when the procedure returns.
An INOUT parameter is initialized by the caller, can be modified by the procedure, and any change made by the
procedure is visible to the caller when the procedure returns.
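A minimal sketch illustrating the three parameter modes (the procedure name and values are hypothetical):
DELIMITER //
CREATE PROCEDURE param_demo (IN p_in INT, OUT p_out INT, INOUT p_inout INT)
BEGIN
    SET p_in    = p_in + 1;     -- change is not visible to the caller
    SET p_out   = 100;          -- value is passed back to the caller
    SET p_inout = p_inout * 2;  -- caller's value is read, and the change is passed back
END //
DELIMITER ;

SET @a = 1; SET @b = 2; SET @c = 3;
CALL param_demo(@a, @b, @c);
SELECT @a, @b, @c;   -- returns 1, 100, 6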
DELIMITER ||
CREATE FUNCTION functionname()
RETURNS INT
BEGIN
RETURN 12;
END;
||
DELIMITER ;
The first line defines what the delimiter character is to be changed to (DELIMITER ||). This needs to be set before
a function is created; otherwise, if it is left at its default ;, the first ; found in the function body will be taken
as the end of the CREATE statement, which is usually not what is desired.
After the CREATE FUNCTION has run you should set the delimiter back to its default of ; as is seen after the function
code in the above example (DELIMITER ;).
SELECT functionname();
+----------------+
| functionname() |
+----------------+
| 12 |
+----------------+
A slightly more complex (but still trivial) example takes a parameter and adds a constant to it:
DELIMITER $$
CREATE FUNCTION add_2 ( my_arg INT )
RETURNS INT
BEGIN
RETURN (my_arg + 2);
END;
$$
DELIMITER ;
SELECT add_2(12);
+-----------+
| add_2(12) |
+-----------+
| 14 |
+-----------+
Note the use of a different argument to the DELIMITER directive. You can actually use any character sequence that
does not appear in the CREATE statement body, but the usual practice is to use a doubled non-alphanumeric
character such as \\, || or $$.
It is good practice to always change the delimiter before and after a function, procedure or trigger creation or
update, as some GUIs don't require the delimiter to change, whereas running queries via the command line always
requires the delimiter to be set.
Let's say we sell products of several types, and we want to count how many products of each type exist.
Our data:
);
CREATE TABLE product_type
(
name VARCHAR(50) NOT NULL PRIMARY KEY
);
CREATE TABLE product_type_count
(
type VARCHAR(50) NOT NULL PRIMARY KEY,
count INT(10) UNSIGNED NOT NULL DEFAULT 0
);
We may achieve the goal using a stored procedure with a cursor:
DELIMITER //
DROP PROCEDURE IF EXISTS product_count;
CREATE PROCEDURE product_count()
BEGIN
DECLARE p_type VARCHAR(255);
DECLARE p_count INT(10) UNSIGNED;
DECLARE done INT DEFAULT 0;
DECLARE product CURSOR FOR
SELECT
type,
COUNT(*)
FROM product
GROUP BY type;
DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done = 1;
TRUNCATE product_type_count;  -- reconstruction: clear the table being repopulated (the original statement and loop were lost)
OPEN product;
read_loop: LOOP
FETCH product INTO p_type, p_count;
IF done THEN
LEAVE read_loop;
END IF;
INSERT INTO product_type_count (type, count) VALUES (p_type, p_count);
END LOOP;
CLOSE product;
END //
DELIMITER ;
CALL product_count();
type | count
----------------
dress | 2
food | 3
While that is a good example of a CURSOR, notice how the entire body of the procedure can be replaced by just a single INSERT ... SELECT statement.
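The original replacement statement is not shown; a plausible sketch of it would be:
INSERT INTO product_type_count (type, count)
    SELECT type, COUNT(*) FROM product GROUP BY type;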
Main notes:
Starts with 1 and increments by 1 automatically when you fail to specify it on INSERT, or specify it as NULL.
The ids are always distinct from each other, but...
Do not make any assumptions (no gaps, consecutively generated, not reused, etc) about the values of the id
other than being unique at any given instant.
Subtle notes:
Note: The order is important! If the search query does not include both columns in the WHERE clause, it can only use
the index when it filters on the leftmost column. In this case, a query with mycol in the WHERE clause will use the
index; a query searching for myothercol without also searching for mycol will not. For more information check out this
blog post.
Note: Due to the way BTREEs work, columns that are usually queried in ranges should go in the rightmost position.
For example, DATETIME columns are usually queried like WHERE datecol > '2016-01-01 00:00:00'. BTREE indexes
handle ranges very efficiently, but only if the column being queried as a range is the last one in the composite index.
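A sketch using the column names from the notes above (the table name is hypothetical):
CREATE INDEX idx_mycol_myothercol ON mytable (mycol, myothercol);

-- can use the index (the leftmost column is constrained):
SELECT * FROM mytable WHERE mycol = 1 AND myothercol = 2;
SELECT * FROM mytable WHERE mycol = 1;

-- cannot use the index (the leftmost column is missing from the WHERE clause):
SELECT * FROM mytable WHERE myothercol = 2;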
Given a table named book with columns named ISBN, Title, and Author, this finds books matching the terms
'Database Programming'. It shows the best matches first.
For this to work, a fulltext index on the Title column must be available:
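The statements themselves are not shown above; a sketch consistent with the description would be:
ALTER TABLE book ADD FULLTEXT INDEX (Title);

SELECT ISBN, Title, Author
FROM book
WHERE MATCH (Title) AGAINST ('Database Programming');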
Given a table named book with columns named ISBN, Title, and Author, this searches for books with the words
'Database' and 'Programming' in the title, but not the word 'Java'.
For this to work, a fulltext index on the Title column must be available:
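A sketch of the boolean-mode variant described here (it relies on the same FULLTEXT index on Title):
SELECT ISBN, Title, Author
FROM book
WHERE MATCH (Title) AGAINST ('+Database +Programming -Java' IN BOOLEAN MODE);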
Given a table named book with columns named ISBN, Title, and Author, this finds books matching the terms 'Date
Database Programming'. It shows the best matches first. The best matches include books written by Prof. C. J. Date.
(But, one of the best matches is also The Date Doctor's Guide to Dating : How to Get from First Date to Perfect Mate. This
shows up a limitation of FULLTEXT search: it doesn't pretend to understand such things as parts of speech or the
meaning of the indexed words.)
For this to work, a fulltext index on the Title and Author columns must be available:
Result:
+------------+
| hypotenuse |
+------------+
| 10 |
+------------+
Finally,
Notes:
That's as simple as it can get, but note that because JSON dictionary keys have to be surrounded by double quotes, the
entire thing should be wrapped in single quotes. If the query succeeds, the data will be stored in a binary format.
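The INSERT being described is not shown here; a sketch that matches the data used in the UPDATE and result below would be (the id column is assumed to be auto-generated):
INSERT INTO myjson (dict)
VALUES ('{"opening": "Sicilian", "variations": ["pelikan", "dragon", "najdorf"]}');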
UPDATE
myjson
SET
dict=JSON_ARRAY_APPEND(dict,'$.variations','scheveningen')
WHERE
id = 2;
Notes:
1. $.variations refers to the variations array in our JSON dictionary. The $ symbol represents the JSON document. For a full
explanation of the JSON paths recognized by MySQL, refer to
https://dev.mysql.com/doc/refman/5.7/en/json-path-syntax.html
2. Since we don't yet have an example on querying using JSON fields, this example uses the primary key.
+----+-----------------------------------------------------------------------------------------+
| id | dict |
+----+-----------------------------------------------------------------------------------------+
| 2 | {"opening": "Sicilian", "variations": ["pelikan", "dragon", "najdorf", "scheveningen"]} |
+----+-----------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
Note, once again, that you need to be careful with the use of single and double quotes. The whole thing has to be
wrapped in single quotes.
MySQL 5.7.8+ supports a native JSON type. While you have different ways to create JSON objects, you can access and
read members in different ways, too.
The main function is JSON_EXTRACT; the -> and ->> operators are friendlier shorthands for it.
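The value of @myjson used below is not shown in this excerpt; one value consistent with the results would be:
SET @myjson = '["A", "B", {"a": 1, "label": "C"}]';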
SELECT
JSON_EXTRACT( @myjson , '$[1]' ) ,
JSON_EXTRACT( @myjson , '$[*].label') ,
JSON_EXTRACT( @myjson , '$[1].*' ) ,
JSON_EXTRACT( @myjson , '$[2].*')
;
-- result values:
'\"B\"', '[\"C\"]', NULL, '[1, \"C\"]'
-- visually:
"B", ["C"], NULL, [1, "C"]
SELECT
myjson_col->>'$[1]' , myjson_col->'$[1]' ,
myjson_col->>'$[*].label' ,
myjson_col->>'$[1].*' ,
myjson_col->>'$[2].*'
FROM tablename ;
-- visually:
B, "B" , ["C"], NULL, [1, "C"]
--^^^ ^^^
As with ->, the ->> operator is always expanded in the output of EXPLAIN, as the following example
demonstrates:
No other sessions can access the tables involved while RENAME TABLE executes, so the rename operation is not
subject to concurrency problems.
Atomic Rename is especially for completely reloading a table without waiting for DELETE and load to finish:
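A sketch of the pattern (table names are placeholders):
CREATE TABLE new_table LIKE real_table;
-- load the fresh data into new_table here
RENAME TABLE real_table TO old_table, new_table TO real_table;
DROP TABLE old_table;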
To DROP database as a SQL Script (you will need DROP privilege on that database):
or
Create Trigger
The CREATE TRIGGER statement creates a trigger named ins_sum that is associated with the account table. It also
includes clauses that specify the trigger action time, the triggering event, and what to do when the trigger activates
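The statement itself is not shown in this excerpt; based on the description, it is presumably along these lines (the amount column name is an assumption):
CREATE TRIGGER ins_sum BEFORE INSERT ON account
FOR EACH ROW SET @sum = @sum + NEW.amount;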
Insert Value
To use the trigger, set the accumulator variable (@sum) to zero, execute an INSERT statement, and then see what
value the variable has afterward:
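The statements are not shown in this excerpt; a sketch whose values match the arithmetic below (the account numbers are illustrative) would be:
SET @sum = 0;
INSERT INTO account VALUES (137, 14.98), (141, 1937.50), (97, -100.00);
SELECT @sum AS 'Total amount inserted';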
+-----------------------+
| Total amount inserted |
+-----------------------+
| 1852.48 |
+-----------------------+
In this case, the value of @sum after the INSERT statement has executed is 14.98 + 1937.50 - 100, or 1852.48.
Drop Trigger
If you drop a table, any triggers for the table are also dropped.
Triggering event
INSERT
UPDATE
DELETE
Once your database becomes non-trivial, it is advisable to set the following parameters:
innodb_buffer_pool_size
This should be set to about 70% of available RAM (if you have at least 4GB of RAM; a smaller percentage if you have
a tiny VM or antique machine). The setting controls the amount of cache used by the InnoDB ENGINE. Hence, it is
very important for performance of InnoDB.
max_allowed_packet = 10M
(M means MB, G means GB, K means KB.)
Setting the GLOBAL variable will ensure a permanent change, whereas setting the SESSION variable will set the value
for the current session.
default_storage_engine = InnoDB
query_cache_type = 0
innodb_file_per_table = 1
innodb_flush_neighbors = 0
Concurrency
Make sure we can create more than the default 4 threads by setting innodb_thread_concurrency to infinity
(0); this lets InnoDB decide based on optimal execution.
innodb_thread_concurrency = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64
Set the capacity (normal load) and capacity_max (absolute maximum) of IOPS for MySQL. The default of 200 is fine
for HDDs, but these days, with SSDs capable of thousands of IOPS, you are likely to want to adjust this number.
There are many tests you can run to determine IOPS. The values above should be nearly that limit if you are running
a dedicated MySQL server. If you are running any other services on the same machine, you should apportion as
appropriate.
innodb_io_capacity = 2500
innodb_io_capacity_max = 3000
RAM utilization
Set the RAM available to MySQL. Whilst the rule of thumb is 70-80%, this really depends on whether or not your
instance is dedicated to MySQL, and how much RAM is available. Don't waste RAM (i.e. resources) if you have a lot
available.
innodb_buffer_pool_size = 10G
block_encryption_mode = aes-256-cbc
To save time in debugging Event-related problems, keep in mind that the global event scheduler must be turned on to
process events.
+-----------------+-------+
| Variable_name | Value |
+-----------------+-------+
| event_scheduler | OFF |
+-----------------+-------+
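The output above presumably comes from a check like the first statement below; the second statement turns the scheduler on:
SHOW VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = ON;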
The above inserts are provided to show a starting point. Note that the 2 events created below will clean out rows.
Ignore what they are actually doing (playing against one another). The point is on the INTERVAL and scheduling.
END$$
DELIMITER ;
ON COMPLETION PRESERVE -- When the event is done processing, retain it. Otherwise, it is deleted.
Events are like triggers. They are not called by a user's program. Rather, they are scheduled. As such, they succeed or fail silently.
The link to the Manual Page shows quite a bit of flexibility with the interval choices, shown below:
interval:
Events are powerful mechanisms that handle recurring and scheduled tasks for your system. They may contain as
many statements, DDL and DML routines, and complicated joins as you may reasonably wish. Please see the
MySQL Manual Page entitled Restrictions on Stored Programs.
type ENUM('fish','mammal','bird')
An alternative is
Notes
As with all cases of MODIFY COLUMN, you must include NOT NULL, and any other qualifiers that originally
existed, else they will be lost. (A sketch follows these notes.)
If you add to the end of the list and the list is under 256 items, the ALTER is done by merely changing the
schema. That is, there will not be a lengthy table copy. (Old versions of MySQL did not have this optimization.)
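A sketch of such an ALTER, appending a value to the end of the list (the table name is hypothetical):
ALTER TABLE animals MODIFY COLUMN type ENUM('fish','mammal','bird','insect') NOT NULL;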
mysql> SHOW WARNINGS;
+---------+------+--------------------------------------------+
| Level | Code | Message |
+---------+------+--------------------------------------------+
| Warning | 1048 | Column 'e' cannot be null |
| Warning | 1265 | Data truncated for column 'e' at row 4 |
| Warning | 1265 | Data truncated for column 'enull' at row 4 |
+---------+------+--------------------------------------------+
3 rows in set (0.00 sec)
What is in the table after those inserts? This uses "+0" to cast to numeric, to see what is actually stored.
+-----+-----+
| e | e+0 |
+-----+-----+
| yes | 1 |
| no | 2 |
| | 0 | -- NULL
| | 0 | -- 'bad-value'
+-----+-----+
4 rows in set (0.00 sec)
+-------+---------+
| enull | enull+0 |
+-------+---------+
| x | 1 |
| y | 2 |
| NULL | NULL |
| | 0 | -- 'bad-value'
+-------+---------+
4 rows in set (0.00 sec)
Note: If you want to use the same container for all your projects, you should create a PATH in your HOME_PATH. If you
want to create it for every project, you could create a docker directory in your project.
version: '2'
services:
  cabin_db:
    image: mysql:latest
    volumes:
      - "./.mysql-data/db:/var/lib/mysql"
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: rootpw
      MYSQL_DATABASE: cabin
      MYSQL_USER: cabin
      MYSQL_PASSWORD: cabinpw
cd PATH_TO_DOCKER-COMPOSE.YML
docker-compose up -d
Hurray!!
docker-compose stop
Best practice...
Use utf8mb4 for any TEXT or VARCHAR column that can have a variety of languages in it.
Use ascii (latin1 is ok) for hex strings (UUID, MD5, etc) and simple codes (country_code, postal_code, etc).
utf8mb4 did not exist until version 5.5.3, so utf8 was the best available before that.
Outside of MySQL, "UTF8" means the same things as MySQL's utf8mb4, not MySQL's utf8.
Collations start with the charset name and usually end with _ci for "case and accent insensitive" or _bin for "simply
compare the bits".
The 'latest' utf8mb4 collation is utf8mb4_unicode_520_ci, based on Unicode 5.2.0. If you are working with a single
language, you might want, say, utf8mb4_polish_ci, which will rearrange the letters slightly, based on Polish
conventions.
City and Country will use UTF8, as we set that as the default character set for the table. Street on the other hand
will use ASCII, as we've specifically told it to do so.
Setting the right character set is highly dependent on your dataset, but can also highly improve portability between
systems working with your data.
Each language (PHP, Python, Java, ...) has its own way of setting the connection charset, which is usually preferable to SET NAMES.
For example: SET NAMES utf8mb4, together with a column declared CHARACTER SET latin1 -- this will convert from
latin1 to utf8mb4 when INSERTing and convert back when SELECTing.
This converts the table, but does not take care of any differences between the engines. Most differences will not
matter, especially for small tables. But for busier tables, other issues need to be taken into account; see Conversion
considerations.
NOTE: You should be connected to your database for DATABASE() function to work, otherwise it will
return NULL. This mostly applies to standard mysql client shipped with server as it allows to connect
without specifying a database.
Run this SQL statement to retrieve all the MyISAM tables in your database.
Finally, copy the output and execute SQL queries from it.
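The statement being described is not shown here; a sketch that generates the needed ALTER statements would be:
SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;')
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema = DATABASE();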
In other words, a transaction will never be complete unless each individual operation within the group is successful.
If any operation within the transaction fails, the entire transaction will fail.
A bank transaction is the best example for explaining this. Consider a transfer between two accounts. To achieve
this you have to write SQL statements that do the following.
If any one of these steps fails, the whole set should be reverted to its previous state.
Atomicity: ensures that all operations within the work unit are completed successfully; otherwise, the
transaction is aborted at the point of failure, and previous operations are rolled back to their former state.
Consistency: ensures that the database properly changes states upon a successfully committed transaction.
Isolation: enables transactions to operate independently of and transparent to each other.
Durability: ensures that the result or effect of a committed transaction persists in case of a system failure.
Transactions begin with the statement START TRANSACTION or BEGIN WORK and end with either a COMMIT or a
ROLLBACK statement. The SQL commands between the beginning and ending statements form the bulk of the
transaction.
START TRANSACTION;
SET @transAmt = '500';
SELECT @availableAmt:=ledgerAmt FROM accTable WHERE customerId=1 FOR UPDATE;
UPDATE accTable SET ledgerAmt=ledgerAmt-@transAmt WHERE customerId=1;
UPDATE accTable SET ledgerAmt=ledgerAmt+@transAmt WHERE customerId=2;
COMMIT;
With START TRANSACTION, autocommit remains disabled until you end the transaction with COMMIT or ROLLBACK. The
autocommit mode then reverts to its previous state.
The FOR UPDATE indicates (and locks) the row(s) for the duration of the transaction.
While the transaction remains uncommitted, this transaction will not be available for others users.
MySQL automatically commits statements that are not part of a transaction. The results of any UPDATE,DELETE or
INSERT statement not preceded with a BEGIN or START TRANSACTION will immediately be visible to all connections.
The AUTOCOMMIT variable is set true by default. This can be changed in the following way,
SELECT @@autocommit;
COMMIT
If AUTOCOMMIT is set to false and the transaction is not committed, the changes will be visible only to the current
connection.
After COMMIT statement commits the changes to the table, the result will be visible for all connections.
Connection 1
Connection 2
Connection 1
mysql> COMMIT;
--->Now COMMIT is executed in connection 1
mysql> SELECT * FROM testTable;
+-----+
| tId |
+-----+
| 1 |
| 2 |
| 3 |
+-----+
Connection 2
ROLLBACK
If anything went wrong in your query execution, ROLLBACK is used to revert the changes. See the explanation below.
Class.forName("com.mysql.jdbc.Driver");
Connection con = DriverManager.getConnection(DB_CONNECTION_URL,DB_USER,USER_PASSWORD);
--->Example for connection url "jdbc:mysql://localhost:3306/testDB");
Character Sets : This indicates what character set the client will use to send SQL statements to the server. It also
specifies the character set that the server should use for sending results back to the client.
This should be mentioned while creating connection to server. So the connection string should be like,
jdbc:mysql://localhost:3306/testDB?useUnicode=true&characterEncoding=utf8
See this for more details about Character Sets and Collations
When you open a connection, the AUTOCOMMIT mode is set to true by default; it should be changed to false to start a
transaction.
You should always call the setAutoCommit() method right after you open a connection.
Otherwise, use START TRANSACTION or BEGIN WORK to start a new transaction. When using START TRANSACTION or BEGIN
WORK, there is no need to change AUTOCOMMIT to false; it will be disabled automatically.
Now you can start transaction. See a complete JDBC transaction example below.
package jdbcTest;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
try {
String DB_CONNECTION_URL =
"jdbc:mysql://localhost:3306/testDB?useUnicode=true&characterEncoding=utf8";
Class.forName("com.mysql.jdbc.Driver");
con = DriverManager.getConnection(DB_CONNECTION_URL,DB_USER,USER_PASSWORD);
A JDBC transaction makes sure that all SQL statements within a transaction block are executed successfully; if any one
of the SQL statements within the transaction block fails, everything within the transaction block is aborted and rolled back.
SELECT @@long_query_time;
+-------------------+
| @@long_query_time |
+-------------------+
| 10.000000 |
+-------------------+
It can be set as a GLOBAL variable, in the my.cnf or my.ini file. Or it can be set by the connection, though this is
unusual. The value is a number of seconds, 0 or greater (the default is 10). What value to use?
The capturing of slow queries is either turned on or off. And the file logged to is also specified. The below captures
these concepts:
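A sketch of the settings being described (the log file path is a placeholder):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/path/to/slowquery.log';
SET GLOBAL long_query_time = 2;   -- seconds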
For more information, please see the MySQL Manual Page The Slow Query Log
Note: The above information on turning the slowlog on/off was changed in 5.6(?); older versions had another
mechanism.
long_query_time=...
turn on the slowlog
run for a few hours
turn off the slowlog (or raise the cutoff)
run pt-query-digest to find the 'worst' couple of queries. Or mysqldumpslow -s t
See the variables basedir and datadir for default location for many logs
Some logs are turned on/off by other VARIABLES. Some are either written to a file or to a table.
If the fullpath to the file is not shown, the file exists in the datadir.
Windows example:
+----------------------------------------------------------+
| @@general_log_file |
+----------------------------------------------------------+
| C:\ProgramData\MySQL\MySQL Server 5.7\Data\GuySmiley.log |
+----------------------------------------------------------+
Linux:
+-----------------------------------+
| @@general_log_file |
+-----------------------------------+
| /var/lib/mysql/ip-ww-xx-yy-zz.log |
+-----------------------------------+
When changes are made to the general_log_file GLOBAL variable, the new log is saved in the datadir. However,
the fullpath may no longer be reflected by examining the variable.
Best practice is to turn capture OFF, save the log file to a backup directory with a filename reflecting the
begin/end datetime of the capture, delete the prior file if a filesystem move of that file did not occur, establish a
new filename for the log file, and then turn capture ON again (all shown below). Best practice also includes a careful
determination of whether you even want to capture at the moment. Typically, capture is ON for debugging purposes only.
/LogBackup/GeneralLog_20160802_1520_to_20160802_1815.log
where the date and time are part to the filename as a range.
Linux is similar. These would represent dynamic changes. Any restart of the server would pick up configuration file
settings.
As for the configuration file, consider the following relevant variable settings:
[mysqld]
general_log_file = /path/to/currentquery.log
general_log = 1
In addition, the variable log_output can be configured for TABLE output, not just FILE. For that, please see
Destinations.
Please see the MySQL Manual Page The General Query Log.
The variable log_error holds the path to the log file for error logging.
In the absence of a configuration file entry for log_error, the system will default its values to @@hostname.err in the
datadir. Note that log_error is not a dynamic variable. As such, changes are done through a cnf or ini file changes
and a server restart (or by seeing "Flushing and Renaming the Error Log File" in the Manual Page link at the bottom
here).
The GLOBAL variable log_warnings sets the level for verbosity which varies by server version. The following snippet
illustrates:
Configuration file changes in cnf and ini files might look like the following.
[mysqld]
log_error = /path/to/CurrentError.log
log_warnings = 2
MySQL 5.7.2 expanded the warning level verbosity to 3 and added the GLOBAL log_error_verbosity. Again, it was
introduced in 5.7.2. It can be set dynamically and checked as a variable or set via cnf or ini configuration file
settings.
As of MySQL 5.7.2:
[mysqld]
log_error = /path/to/CurrentError.log
log_warnings = 2
log_error_verbosity = 3
Please see the MySQL Manual Page entitled The Error Log especially for Flushing and Renaming the Error Log file,
and its Error Log Verbosity section with versions related to log_warnings and error_log_verbosity.
This table can be partitioned by range in a number of ways, depending on your needs. One way would be to use the
store_id column. For instance, you might decide to partition the table 4 ways by adding a PARTITION BY RANGE
clause as shown here:
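The original CREATE TABLE is not reproduced in this excerpt; a sketch along the lines of the reference manual's example (20 stores split into ranges of 5) would be:
CREATE TABLE employees (
    id INT NOT NULL,
    fname VARCHAR(30),
    lname VARCHAR(30),
    store_id INT NOT NULL
)
PARTITION BY RANGE (store_id) (
    PARTITION p0 VALUES LESS THAN (6),
    PARTITION p1 VALUES LESS THAN (11),
    PARTITION p2 VALUES LESS THAN (16),
    PARTITION p3 VALUES LESS THAN MAXVALUE
);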
MAXVALUE represents an integer value that is always greater than the largest possible integer value (in
mathematical language, it serves as a least upper bound).
For the examples that follow, we assume that the basic definition of the table to be partitioned is provided by the
CREATE TABLE statement shown here:
Suppose that there are 20 video stores distributed among 4 franchises as shown in the following table.
To partition this table in such a way that rows for stores belonging to the same region are stored in the same
partition
The following statement creates a table that uses hashing on the store_id column and is divided into 4 partitions:
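The statement itself is not shown here; a sketch consistent with the description would be:
CREATE TABLE employees_hash (
    id INT NOT NULL,
    fname VARCHAR(30),
    lname VARCHAR(30),
    store_id INT NOT NULL
)
PARTITION BY HASH (store_id)
PARTITIONS 4;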
We are going to configure the Master that it should keep a log of every action performed on it. We are going to
configure the Slave server that it should look at the log on the Master and whenever changes happens in log on the
Master, it should do the same thing.
Master Configuration
First of all, we need to create a user on the Master. This user is going to be used by Slave to create a connection
with the Master.
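The statements for creating that user are not shown; a sketch (the user name and password follow the placeholders used in the Slave configuration below) would be:
CREATE USER 'user_name'@'%' IDENTIFIED BY 'user_password';
GRANT REPLICATION SLAVE ON *.* TO 'user_name'@'%';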
Now my.inf (my.cnf in Linux) file should be edited. Include the following lines in [mysqld] section.
server-id = 1
log-bin = mysql-bin.log
binlog-do-db = your_database
The second line tells MySQL to start writing a log in the specified log file. In Linux this can be configured like log-bin
= /home/mysql/logs/mysql-bin.log. If you are starting replication in a MySQL server in which replication has
already been used, make sure this directory is empty of all replication logs.
The third line is used to configure the database for which we are going to write log. You should replace
your_database with your database name.
Make sure skip-networking has not been enabled and restart the MySQL server(Master)
Slave Configuration
my.inf file should be edited in Slave also. Include the following lines in [mysqld] section.
server-id = 2
master-host = master_ip_address
master-connect-retry = 60
master-user = user_name
master-password = user_password
replicate-do-db = your_database
relay-log = slave-relay.log
relay-log-index = slave-relay-log.index
The first line is used to assign an ID to this MySQL server. This ID should be unique.
The next two lines tell the username and password to the Slave, by using which it connect the Master.
The last two lines used to assign relay-log and relay-log-index file names.
Make sure skip-networking has not been enabled and restart the MySQL server(Slave)
If data is constantly being added to the Master, we will have to prevent all database access on the Master so
nothing can be added. This can be achieved by running the following statement on the Master.
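The statement is not shown here; given that the lock is released later with UNLOCK TABLES, it is presumably:
FLUSH TABLES WITH READ LOCK;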
If no data is being added to the server, you can skip the above step.
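The dump command itself is not shown; it is presumably something like:
mysqldump -u root -p your_database > /path/to/backup.sql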
Change your_database and the backup directory according to your setup. You will now have a file called backup.sql in
the given location.
If your database does not exist on your Slave, create it by executing the following:
Start Replication
To start replication, we need to find the log file name and log position in the Master. So, run the following in Master
+---------------------+----------+-------------------------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------------+----------+-------------------------------+------------------+
| mysql-bin.000001 | 130 | your_database | |
+---------------------+----------+-------------------------------+------------------+
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='master_ip_address', MASTER_USER='user_name',
    MASTER_PASSWORD='user_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=130; -- continuation reconstructed; use your own SHOW MASTER STATUS values
START SLAVE;
First we stop the Slave. Then we tell it exactly where to look in the Master log file. For MASTER_LOG_FILE name and
MASTER_LOG_POS, use the values which we got by running SHOW MASTER STATUS command on the Master.
You should change the I.P of the Master in MASTER_HOST, and change the user and password accordingly.
The Slave will now be waiting. The status of the Slave can be viewed by running the following:
If you previously executed FLUSH TABLES WITH READ LOCK on the Master, release the tables from the lock by running the
following:
UNLOCK TABLES;
Now the Master keeps a log of every action performed on it, and the Slave server looks at the log on the Master.
Whenever a change appears in the log on the Master, the Slave replicates it.
To skip just one query that is hanging the slave, use the following syntax
This statement skips the next N events from the master. This statement is valid only when the slave threads are not
running. Otherwise, it produces an error.
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter=1;
START SLAVE;
In some cases this is fine. But if the statement is part of a multi-statement transaction, it becomes more complex,
because skipping the error producing statement will cause the whole transaction to be skipped.
If you want to skip more queries which produce the same error code, and if you are sure that skipping those errors
will not leave your slave inconsistent and you want to skip them all, you would add a line to skip that error code in
your my.cnf.
For example you might want to skip all duplicate errors you might be getting
slave-skip-errors = 1062
You can also skip other types of errors, or all error codes, but make sure that skipping those errors will not leave your
slave inconsistent. The following are the syntax and examples:
slave-skip-errors=1062,1053
slave-skip-errors=all
slave-skip-errors=ddl_exist_errors
If you need to specify the password on the command line (e.g. in a script), you can add it after the -p option without
a space:
If your password contains spaces or special characters, remember to use escaping depending on your shell / system.
(Explicitly specifying the password on the command line is not recommended due to security concerns.)
The file extension .sql is fully a matter of style. Any extension would work.
Note that:
Alternatively, when in the MySQL Command line tool, you can restore (or run any other script) by using the source
command:
source filename.sql
or
\. filename.sql
Option 1:
Option 2:
If the destination server can connect to the host server, you can use a pipeline to copy the database from one server to the other:
Similarly, the script could be run on the source server, pushing to the destination. In either case, it is likely to be
significantly faster than Option 1.
Important: If you don't want to lock up the source db, you should also include --lock-tables=false. But you may
not get an internally consistent db image that way.
When using --routines the creation and change time stamps are not maintained, instead you should dump and
reload the contents of mysql.proc.
You are prompted for the password, after which the backup starts.
1 \t Arthur Dent
2 \t Marvin
3 \t Zaphod Beeblebrox
1|Arthur Dent
2|Marvin
3|Zaphod Beeblebrox
id Name
3 Yooden Vranx
1 \t Arthur Dent
2 \t Marvin
3 \t Zaphod Beeblebrox
1;max;male;manager;12-7-1985
2;jack;male;executive;21-8-1990
.
.
.
1000000;marta;female;accountant;15-6-1992
1;max;male;manager;17-Jan-1985
2;jack;male;executive;01-Feb-1992
.
.
.
1000000;marta;female;accountant;25-Apr-1993
In this case you can change the format of the dob column before inserting like this.
This example of LOAD DATA INFILE does not specify all the available features.
If this option has been enabled in your server, it can be used to load a file that exists on the client computer rather
than the server. A side effect is that duplicate rows for unique values are ignored.
When the replace keyword is used duplicate unique or primary keys will result in the existing row being replaced
with new ones
The opposite of REPLACE, existing rows will be preserved and new ones ignored. This behavior is similar to LOCAL
described above. However the file need not exist on the client computer.
Sometimes ignoring or replacing all duplicates may not be the ideal option. You may need to make decisions based
on the contents of other columns. In that case the best option is to load into an intermediary table and transfer
from there.
Query (to select all the different cities, only distinct values, from the "Customers" and the "Suppliers" tables):
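The query itself is not reproduced here; a sketch that matches the description and the distinct result below would be:
SELECT City FROM Customers
UNION
SELECT City FROM Suppliers
ORDER BY City;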
Result:
Number of Records: 10
City
------
Aachen
Albuquerque
Anchorage
Annecy
Barcelona
Barquisimeto
Bend
Bergamo
Berlin
Bern
Query:
Result:
Number of Records: 12
City
-------
Aachen
Albuquerque
Anchorage
Ann Arbor
Annecy
Barcelona
Barquisimeto
Bend
Bergamo
Query:
Result:
Number of Records: 14
City Country
Aachen Germany
Berlin Germany
Berlin Germany
Brandenburg Germany
Cunewalde Germany
Cuxhaven Germany
Frankfurt Germany
Frankfurt a.M. Germany
Köln Germany
Leipzig Germany
Mannheim Germany
München Germany
Münster Germany
Stuttgart Germany
By omitting the password value MySQL will ask for any required password as the first input. If you specify password
the client will give you an 'insecure' warning:
For local connections --socket can be used to point to the socket file:
Omitting the socket parameter will cause the client to attempt to attach to a server on the local machine. The
server must be running to connect to it.
+----+-------+--------+
| id | name | gender |
+----+-------+--------+
id name gender
1 Kathy f
2 John m
1 Kathy f
2 John m
A temporary table will be automatically destroyed when the session ends or the connection is closed. The user can also
explicitly drop a temporary table.
The same temporary table name can be used in many connections at the same time, because a temporary table is
only available to, and accessible by, the client who creates it.
The IF NOT EXISTS keyword can be used, as mentioned below, to avoid a 'table already exists' error. But in that case the
table will not be created if the table name you are using already exists in your current session.
Use IF EXISTS to prevent an error occurring for tables that may not exist
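Sketches of the statements described above (the table name is hypothetical; the columns mirror the sample output earlier in this section):
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_users (
    id INT NOT NULL,
    name VARCHAR(30),
    gender CHAR(1)
);

DROP TEMPORARY TABLE IF EXISTS tmp_users;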
[mysql]
prompt = '\u@\h [\d]> '
Consider the following table containing job applicants, the companies they worked for, and the date they left the
company. NULL indicates that an applicant still works at the company:
+--------------+-----------------+------------+
| applicant_id | company_name | end_date |
+--------------+-----------------+------------+
| 1 | Google | NULL |
| 1 | Initech | 2013-01-31 |
| 2 | Woodworking.com | 2016-08-25 |
| 2 | NY Times | 2013-11-10 |
| 3 | NFL.com | 2014-04-13 |
+--------------+-----------------+------------+
Your task is to compose a query that returns all rows after 2016-01-01, including any employees that are still
working at a company (those with NULL end dates). This select statement:
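The statement is not reproduced here; a straightforward attempt (using the example table name that appears in the later queries) would be:
SELECT * FROM example WHERE end_date > '2016-01-01';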
+--------------+-----------------+------------+
| applicant_id | company_name | end_date |
+--------------+-----------------+------------+
| 2 | Woodworking.com | 2016-08-25 |
+--------------+-----------------+------------+
Per the MySQL documentation, comparisons using the arithmetic operators <, >, =, and <> themselves return NULL
instead of a boolean TRUE or FALSE. Thus a row with a NULL end_date is neither greater than 2016-01-01 nor less
than 2016-01-01.
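A query that also returns the rows with NULL end dates, producing the result below, therefore has to test for NULL explicitly:
SELECT * FROM example WHERE end_date > '2016-01-01' OR end_date IS NULL;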
+--------------+-----------------+------------+
| applicant_id | company_name | end_date |
+--------------+-----------------+------------+
| 1 | Google | NULL |
| 2 | Woodworking.com | 2016-08-25 |
+--------------+-----------------+------------+
Working with NULLs becomes more complex when the task involves aggregation functions like MAX() and a GROUP BY clause:
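The aggregating query is not reproduced here; one of this shape produces the result shown below:
SELECT applicant_id, MAX(end_date) FROM example GROUP BY applicant_id;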
+--------------+---------------+
| applicant_id | MAX(end_date) |
+--------------+---------------+
| 1 | 2013-01-31 |
| 2 | 2016-08-25 |
| 3 | 2014-04-13 |
+--------------+---------------+
However, knowing that NULL indicates an applicant is still employed at a company, the first row of the result is
inaccurate. Using CASE WHEN provides a workaround for the NULL issue:
SELECT
applicant_id,
CASE WHEN MAX(end_date is null) = 1 THEN 'present' ELSE MAX(end_date) END
max_date
FROM example
GROUP BY applicant_id;
+--------------+------------+
| applicant_id | max_date |
+--------------+------------+
| 1 | present |
| 2 | 2016-08-25 |
| 3 | 2014-04-13 |
+--------------+------------+
This result can be joined back to the original example table to determine the company at which an applicant last
worked:
SELECT
data.applicant_id,
data.company_name,
data.max_date
FROM (
SELECT
*,
CASE WHEN end_date is null THEN 'present' ELSE end_date END max_date
FROM example
) data
INNER JOIN (
SELECT
applicant_id,
CASE WHEN MAX(end_date is null) = 1 THEN 'present' ELSE MAX(end_date) END max_date
FROM
example
GROUP BY applicant_id
) j
ON data.applicant_id = j.applicant_id AND data.max_date = j.max_date;
+--------------+-----------------+------------+
| applicant_id | company_name | max_date |
+--------------+-----------------+------------+
These are just a few examples of working with NULL values in MySQL.
Connection:
default_charset UTF-8
<form accept-charset="UTF-8">
$t = json_encode($s, JSON_UNESCAPED_UNICODE);
Section 59.2: Get the current time in a form that looks like a
Javascript timestamp
Javascript timestamps are based on the venerable UNIX time_t data type, and show the number of milliseconds
since 1970-01-01 00:00:00 UTC.
This expression gets the current time as a Javascript timestamp integer. (It does so correctly regardless of the
current time_zone setting.)
ROUND(UNIX_TIMESTAMP(NOW(3)) * 1000.0, 0)
If you have TIMESTAMP values stored in a column, you can retrieve them as integer Javascript timestamps using the
UNIX_TIMESTAMP() function.
If your column contains DATETIME columns and you retrieve them as Javascript timestamps, those timestamps will
be offset by the time zone offset of the time zone they're stored in.
inserts a row containing NOW() values with millisecond precision into the table.
Notice that you must use NOW(3) rather than NOW() if you use that function to insert high-precision time values.
displays a value like 2016-11-19 09:52:53.248000 with fractional microseconds. Because we used NOW(3), the final
three digits in the fraction are 0.
FROM_UNIXTIME(1478960868932 * 0.001)
It's simple to use that kind of expression to store your Javascript timestamp into a MySQL table. Do this:
1:M is one-directional, that is, any time you query a 1:M relationship, you can use the 'one' row to select 'many'
rows in another table, but you cannot use a single 'many' row to select more than a single 'one' row.
EMPLOYEES
MANAGERS
Results in:
Ultimately, for every manager we query for, we will see 1 or more employees returned.
SELECT m.mgr_id, m.first_name, m.last_name
FROM managers m
INNER JOIN employees e ON e.mgr_id = m.mgr_id
WHERE e.emp_id = 'E03';
As this is the inverse of the above example, we know that for every employee we query for, we will only ever see a single manager returned.
SHOW VARIABLES;
You can specify if you want the session variables or the global variables as follows:
Session variables:
Global variables:
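The two commands referred to are presumably:
SHOW SESSION VARIABLES;
SHOW GLOBAL VARIABLES;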
Like any other SQL command you can add parameters to your query such as the LIKE command:
You can also filter the results of the SHOW query using a WHERE parameter as follows:
SHOW STATUS;
You can specify whether you wish to receive the SESSION or GLOBAL status of your sever like so: Session status:
Global status:
Like any other SQL command you can add parameters to your query such as the LIKE command:
The main difference between GLOBAL and SESSION is that with the GLOBAL modifier the command displays
aggregated information about the server and all of its connections, while the SESSION modifier will only show the
values for the current connection.
mkdir /home/ubuntu/mysqlcerts
cd /home/ubuntu/mysqlcerts
To generate keys, create a certificate authority (CA) to sign the keys (self-signed):
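The CA commands themselves are not shown in this excerpt; typical commands that produce the ca-key.pem and ca.pem files used below would be:
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem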
The values entered at each prompt won't affect the configuration. Next create a key for the server, and sign using
the CA from before:
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out
server-cert.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out
client-cert.pem
vim /etc/mysql/mysql.conf.d/mysqld.cnf
ssl-ca = /home/ubuntu/mysqlcerts/ca.pem
ssl-cert = /home/ubuntu/mysqlcerts/server-cert.pem
ssl-key = /home/ubuntu/mysqlcerts/server-key.pem
Connect in the same way, passing in the extra options ssl-ca, ssl-cert, and ssl-key, using the generated client
key. For example, assuming cd /home/ubuntu/mysqlcerts:
Enforcing SSL
If you don't want to manage client keys, use the client key from earlier and automatically use that for all clients.
Open MySQL configuration file, for example:
vim /etc/mysql/mysql.conf.d/mysqld.cnf
ssl-ca = /home/ubuntu/mysqlcerts/ca.pem
ssl-cert = /home/ubuntu/mysqlcerts/client-cert.pem
ssl-key = /home/ubuntu/mysqlcerts/client-key.pem
Now superman only has to type the following to login via SSL:
Connecting from another program, for example in Python, typically only requires an additional parameter to the
connect function. A Python example:
import MySQLdb
ssl = {'cert': '/home/ubuntu/mysqlcerts/client-cert.pem', 'key': '/home/ubuntu/mysqlcerts/client-key.pem'}
conn = MySQLdb.connect(host='127.0.0.1', user='superman', passwd='imsoawesome', ssl=ssl)
https://www.percona.com/blog/2013/06/22/setting-up-mysql-ssl-and-secure-connections/
https://lowendbox.com/blog/getting-started-with-mysql-over-ssl/
http://xmodulo.com/enable-ssl-mysql-server-client.html
https://ubuntuforums.org/showthread.php?t=1121458
Directory path assumes CentOS or RHEL (adjust as needed for other distros):
mkdir /etc/pki/tls/certs/mysql/
Be sure to set permissions on the folder and files. mysql needs full ownership and access.
# vi /etc/my.cnf
# i
[mysqld]
Then
Don't forget to open your firewall to allow connections from appclient (using IP 1.2.3.4)
mysql -uroot -p
Issue the following to create a user for the client. Note the REQUIRE SSL in the GRANT statement.
You should still be in /root/certs/mysql from the first step. If not, cd back to it for one of the commands below.
openssl req -sha1 -newkey rsa:2048 -days 730 -nodes -keyout client-key.pem > client-req.pem
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -sha1 -req -in client-req.pem -days 730 -CA ca-cert.pem -CAkey ca-key.pem -set_serial
01 > client-cert.pem
Note: I used the same common name for both server and client certificates. YMMV.
cat ca.pem
ssh appclient
mkdir /etc/pki/tls/certs/mysql/
Now, place the client certificates (created on dbserver) on appclient. You can either scp them over, or just copy and
paste the files one by one.
scp dbserver
# copy files from dbserver to appclient
# exit scp
Again, be sure to set permissions on the folder and files. mysql needs full ownership and access.
/etc/pki/tls/certs/mysql/ca.pem
/etc/pki/tls/certs/mysql/client-cert.pem
/etc/pki/tls/certs/mysql/client-key.pem
vi /etc/my.cnf
# i
[client]
ssl-ca=/etc/pki/tls/certs/mysql/ca.pem
ssl-cert=/etc/pki/tls/certs/mysql/client-cert.pem
ssl-key=/etc/pki/tls/certs/mysql/client-key.pem
# :wq
mysql -uroot -p
Initially I saw
have_openssl NO
The problem was that root owned client-cert.pem and the containing folder. The solution was to set ownership of
/etc/pki/tls/certs/mysql/ to mysql.
Attempt to connect to dbserver's mysql instance using the account created above.
To confirm you are connected with SSL enabled, issue the following command from the MariaDB/MySQL prompt:
\s
That will show the status of your connection, which should look something like this:
Connection id: 4
Current database:
Current user: iamsecure@appclient
SSL: Cipher in use is DHE-RSA-AES256-GCM-SHA384
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server: MariaDB
Server version: 5.X.X-MariaDB MariaDB Server
Protocol version: 10
Connection: dbserver via TCP/IP
Server characterset: latin1
Db characterset: latin1
Client characterset: utf8
Conn. characterset: utf8
TCP port: 3306
Uptime: 42 min 13 sec
If you get permission denied errors on your connection attempt, check your GRANT statement above to make sure
there aren't any stray characters or ' marks.
If you have SSL errors, go back through this guide to make sure the steps are orderly.
This worked on RHEL7 and will likely work on CentOS7, too. Cannot confirm whether these exact steps will work
elsewhere.
$ mysql -u root -p
Here, we have successfully created a new user, but this user won't have any permissions. To assign permissions to
the user, use the following command:
However, for situations where it is not advisable to hard-code the password in cleartext, it is also possible to specify
the hashed value directly, as returned by the PASSWORD() function, using the PASSWORD directive:
That prevents access from other servers. You should hand out SUPER to very few people, and they should be aware
of their responsibility. The application should not have SUPER.
That way, someone who hacks into the application code can't get past dbname. This can be further refined via
either of these:
As you say, there is no absolute security. My point here is that you can do a few things to slow hackers down.
(The same goes for honest people goofing.)
In rare cases, you may need the application to do something available only to root. This can be done via a "Stored
Procedure" that has SECURITY DEFINER (and root defines it). That will expose only what the SP does, which might,
for example, be one particular action on one particular table.
Using localhost relies on the security of the server. For best practice, root should only be allowed in through
localhost. In some cases, these mean the same thing: 127.0.0.1 and ::1.
in Ubuntu or Debian:
sudo /etc/init.d/mysql stop
in CentOS, Fedora or Red Hat Enterprise Linux:
sudo /etc/init.d/mysqld stop
mysql -u root
in Ubuntu or Debian:
sudo /etc/init.d/mysql stop
sudo /etc/init.d/mysql start
in CentOS, Fedora or Red Hat Enterprise Linux:
sudo /etc/init.d/mysqld stop
sudo /etc/init.d/mysqld start
C:\> cd C:\mysql\bin
Note: this will work only if you are physically on the same server.
Change the root password as soon as possible by logging in with the generated temporary password and setting a
custom password for the superuser account:
Note: MySQL's validate_password plugin is installed by default. This will require that passwords contain at least one
upper case letter, one lower case letter, one digit, and one special character, and that the total password length is
at least 8 characters.
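On MySQL 5.7 and later this is usually done with ALTER USER; the password below is only an example that happens to satisfy the default policy:
ALTER USER 'root'@'localhost' IDENTIFIED BY 'N3w-Str0ng-Pa55!';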
$ mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Then I used the command mysqld_safe --skip-grant-tables & but I got the error:
I solved:
$ mkdir -p /var/run/mysqld
$ chown mysql:mysql /var/run/mysqld
Now I run the same command mysqld_safe --skip-grant-tables & and get
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of
their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
Reading table information for completion of table and column names You can turn off this feature to get a quicker
startup with -A
Database changed
Or, if you have a MySQL root account that can connect from everywhere, you should also do:
USE mysql
UPDATE user SET Password = PASSWORD('newpwd')
WHERE Host = 'localhost' AND User = 'root';
And if you have a root account that can access from everywhere:
USE mysql
UPDATE user SET Password = PASSWORD('newpwd')
WHERE Host = '%' AND User = 'root';
FLUSH PRIVILEGES;
sudo /etc/init.d/mysql stop
sudo /etc/init.d/mysql start
Now run mysql -u root -p again and use the new password to get
mysql>
Login:
mysql -u root
In a Unix shell, stop the MySQL instance that is running without grant tables, then restart MySQL with grant tables:
= column(s) from the WHERE clause first. (eg, INDEX(a,b,...) for WHERE a=12 AND b='xyz' ...)
IN column(s); the optimizer may be able to leapfrog through the index.
One "range" (eg x BETWEEN 3 AND 9, name LIKE 'J%') It won't use anything past the first range column.
All the columns in GROUP BY, in order
All the columns in ORDER BY, in order. Works only if all are ASC or all are DESC or you are using 8.0.
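Putting those rules together for a hypothetical query:
-- For:  WHERE a = 12 AND b IN (1,2,3) AND x BETWEEN 3 AND 9
--       GROUP BY g  ORDER BY o
-- One reasonable composite index, following the order above:
-- the '=' column first, then the IN column, then the single range column.
-- Columns past the first range (g, o) would not be used, so they are left out here.
ALTER TABLE t ADD INDEX idx_where (a, b, x);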
When using COMPACT row format (the default InnoDB format) and variable-length character sets, such as
utf8 or sjis, CHAR(N) columns occupy a variable amount of space, but still at least N bytes.
3. For tables that are big, or contain lots of repetitive text or numeric data, consider using COMPRESSED row
format. Less disk I/O is required to bring data into the buffer pool, or to perform full table scans. Before
making a permanent decision, measure the amount of compression you can achieve by using COMPRESSED
versus COMPACT row format. Caveat: Benchmarks rarely show better than 2:1 compression and there is a lot
of overhead in the buffer_pool for COMPRESSED.
4. Once your data reaches a stable size, or a growing table has increased by tens or some hundreds of
megabytes, consider using the OPTIMIZE TABLE statement to reorganize the table and compact any wasted
space. The reorganized tables require less disk I/O to perform full table scans. This is a straightforward
technique that can improve performance when other techniques such as improving index usage or tuning
application code are not practical. Caveat: Regardless of table size, OPTIMIZE TABLE should only rarely be
performed. This is because it is costly, and rarely improves the table enough to be worth it. InnoDB is
reasonably good at keeping its B+Trees free of a lot of wasted space.
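Hedged sketches of the two operations just described, using a hypothetical table name:
-- item 3: try COMPRESSED on a large, repetitive table and compare sizes
-- (needs innodb_file_per_table; older versions also need innodb_file_format=Barracuda)
ALTER TABLE big_history ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
-- item 4: reorganize the table and reclaim wasted space (run rarely)
OPTIMIZE TABLE big_history;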
This works for DATE, DATETIME, TIMESTAMP, and even DATETIME(6) (microseconds):
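The closed-open range test is the usual way to get that behavior; a sketch with a hypothetical column and dates:
SELECT COUNT(*)
  FROM orders
 WHERE created_at >= '2017-02-01'
   AND created_at <  '2017-02-01' + INTERVAL 1 MONTH;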
Section 69.2: OR
In general OR kills optimization.
WHERE a = 12 OR b = 78
cannot use INDEX(a,b), and may or may not use INDEX(a), INDEX(b) via "index merge". Index merge is better
than nothing, but only barely.
WHERE x = 3 OR x = 5
is turned into
WHERE x IN (3, 5)
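When the OR spans two different columns, as in the first example above, one standard workaround is to rewrite it as a UNION so each branch can use its own index (table and columns here are hypothetical):
-- assumes INDEX(a) and INDEX(b) exist on t
SELECT id FROM t WHERE a = 12
UNION DISTINCT
SELECT id FROM t WHERE b = 78;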
The main lesson for a novice is to learn of "composite" indexes. Here's a quick example:
INDEX(last_name, first_name)
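For instance, with a hypothetical people table carrying that index:
-- can use INDEX(last_name, first_name):
SELECT * FROM people WHERE last_name = 'James';
SELECT * FROM people WHERE last_name = 'James' AND first_name = 'Rick';
-- cannot use it (the leading column last_name is missing from the WHERE):
SELECT * FROM people WHERE first_name = 'Rick';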
This correlated subquery is often pretty good. Note: It must return at most 1 value. It is often useful as an
alternative to, though not necessarily faster than, a LEFT JOIN.
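A sketch of such a correlated subquery, with a hypothetical orders/shipments schema; note the subquery yields at most one value per outer row:
SELECT o.id,
       ( SELECT MAX(s.shipped_at)
           FROM shipments AS s
          WHERE s.order_id = o.id ) AS last_shipped
  FROM orders AS o;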
SELECT ...
FROM ( SELECT ... ) AS a
JOIN b ON ...
SELECT ...
FROM a
JOIN b ON ...
WHERE ...
GROUP BY a.id
First, the JOIN expands the number of rows; then the GROUP BY whittles it back down to the number of rows in a.
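One way to avoid that explode-then-implode pattern, sketched here with hypothetical tables, is to do the GROUP BY inside a derived table first and join afterwards, as in the first skeleton above:
SELECT a.id, ab.cnt
  FROM a
  JOIN ( SELECT a_id, COUNT(*) AS cnt
           FROM b
          GROUP BY a_id ) AS ab  ON ab.a_id = a.id;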
To avoid such errors, either don't use reserved words as identifiers or wrap the offending identifier in backticks.
Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your
MySQL server version for the right syntax to use near 'order' at line 1
See also: Syntax error due to using a reserved word as a table or column name in MySQL.
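For example, order is a reserved word, so it must be backtick-quoted when used as a table name:
-- fails with error 1064:
-- SELECT * FROM order WHERE id = 1;
-- works:
SELECT * FROM `order` WHERE id = 1;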
0x49D1 Chapter 10
4thfloorstudios Chapter 12
a coder Chapter 62
Abhishek Aggrawal Chapters 30, 24 and 21
Abubakkar Chapter 20
Adam Chapter 14
agold Chapter 50
Alex Recarey Chapter 31
alex9311 Chapter 25
Alvaro Flaño Larrondo Chapter 6
Aman Dhanda Chapter 1
Aminadav Chapter 63
Andy Chapter 1
andygeers Chapter 25
Ani Menon Chapters 3 and 53
animuson Chapter 6
aries12 Chapter 52
arushi Chapter 68
Aryo Chapter 25
Asaph Chapters 50 and 52
Asjad Athick Chapter 3
Athafoud Chapter 1
BacLuc Chapter 67
Barranka Chapters 50, 25, 31 and 19
Batsu Chapters 50, 20, 11, 63, 51 and 54
Ben Visness Chapter 31
Benvorth Chapters 25, 16 and 3
Bhavin Solanki Chapter 3
bhrached Chapter 52
Bikash P Chapter 16
Blag Chapter 37
CGritton Chapter 10
ChintaMoney Chapter 38
Chip Chapter 3
Chris Chapter 12
CodeWarrior Chapters 1 and 38
CPHPython Chapters 6 and 25
dakab Chapter 2
Damian Yerrick Chapter 15
Darwin von Corax Chapters 25 and 30
Dinidu Chapter 10
Dipen Shah Chapter 1
Divya Chapter 24
Drew Chapters 7, 25, 10, 29, 30, 11, 21, 12, 3, 2, 39, 46 and 4
e4c5 Chapters 24, 52, 34 and 37
Eugene Chapters 38, 56 and 62
falsefive Chapter 60
Filipe Martins Chapter 14
Florian Genser Chapters 36 and 13