

VISOR® Robotic URCap - Version 1.4.0


Reference Manual
VISOR® Robotic URCap Version 1.4.0: User Manual

SensoPart Industriesensorik GmbH and its subsidiaries are not liable for printing errors and mistakes that may have occurred in drafting this document. This document is subject to delivery and technical alterations without notice.

Under no circumstances can SensoPart Industriesensorik GmbH or its subsidiaries be held responsible for any errors, omissions, or problems in this information, or for any incident (losses, injuries or damages) that might result from the use or misuse of this document or the software and hardware described therein.

Use of the VISOR® hardware, software, and its components is entirely at your own risk. Should anything prove faulty, you
assume the entire cost of all service, repair or correction.

The provided programs and examples allow using the VISOR® Robotic in combination with UR CB-series or e-series robots. They assume a very careful user who considers all necessary safety measures. Always conduct a risk assessment before placing the robot into operation. All example programs are provided for informational purposes only. They must be custom-modified, tested and debugged to work with each unique application.

No part of this document may be reproduced, published, or stored in databases or information retrieval systems in any
form – even in part – nor may illustrations, drawings, or the layout be copied without prior written permission from
SensoPart Industriesensorik GmbH.

© 2020 SensoPart Industriesensorik GmbH


Nägelseestraße 16
D-79288 Gottenheim
Germany

Document History

Version Remarks
1.0 Initial version
2 Update for VISOR® Robotic URCap Version 1.0.8
3 Update for VISOR® Robotic URCap Version 1.2.0
4 Update for VISOR® Robotic URCap Version 1.3.1
5 Update for VISOR® Robotic URCap Version 1.4.0

Content
1 Introduction
1.1 Guide to symbols
1.2 Safety instructions
1.3 Intended use
1.4 Prerequisites and supported Versions
2 VISOR® Robotic URCap: Installation Node
2.1 Installation Node: Network
2.1.1 Connection Status
2.1.2 Auto Connect
2.2 Installation Node: Jobs
2.2.1 Backing up and Restoring Jobsets from/to the VISOR®
2.2.2 Uploading the VISOR® QuickStart Template Jobsets
2.3 Installation Node: Calibration
2.4 Installation Node: Live Image
3 VISOR® Robotic URCap: Program Node “VISOR® Calibrate”
3.1 Pre-Requisites: Necessary Settings of the Robot
3.2 Calibration with the Calibration Plate
3.3 Calibration with the Point Pair List
3.4 Calibration: Hints and Good Practices
3.4.1 Requirements for the Optical Set-Up and the Usage of the Calibration Plate
4 VISOR® Robotic URCap: Program Node “VISOR® Pick”
4.1 VISOR® Pick: Program Node and Options
4.2 VISOR® Pick Method Node / Behavior if no part has been found
5 VISOR® Robotic URCap: Assignments, Program Node “VISOR® Terminal”
5.1 Variables, Assignments and Logs
5.2 VISOR® Terminal
6 SensoConfig: Choosing the Right Detector for Robotics Applications
6.1 Detecting Objects Using Contour Detector
6.2 Detecting Objects Using Contour Alignment
6.2.1 Grip Point Correction – Result Offset
6.2.2 Gripping Region
6.3 Detecting Objects Using BLOB-Detector
7 Troubleshooting


1 Introduction
Where is the part exactly? Which is the best place to grip it? Is there sufficient space to take hold of it or is another part in
the way? The robot needs all this information to be able to pick up a part from a conveyor belt, a tray or vibrating
conveyor. The SensoPart VISOR® vision sensor is a reliable, robust and easy to integrate solution for generating the
necessary information. Such a machine vision solution for identifying objects in an image and sending the information to
the robot offers several chances and advantages for solving robotic applications. Typical applications include locating
objects for pick and place applications or assembly processes of all kinds.
VISOR® is a vision sensor, i.e. it comprises camera, illumination, optics and a computing unit for processing the data and
communicating the results to the robot or control in a small robust housing. A PC or laptop is only necessary for
configuring it. In production, it is a stand-alone device sending data (e.g. object positions in robot coordinates) directly to
the robot. It can communicate via discrete I/O, Ethernet TCP/IP, Profinet, EtherNet/IP, or RS422/RS232.
The VISOR® URCap is a software extension for Universal Robots (UR) robots with PolyScope controllers. It allows you to easily connect to a VISOR® vision sensor.

1.1 Guide to symbols

DANGER / WARNING / CAUTION


This symbol indicates potentially hazardous situations, which, if not avoided, could result in death or serious
injury.

Attention
This symbol indicates text passages that must be observed. Non-observance may result in personal injury
or damage to property.

Please note
This symbol highlights useful tips and recommendations as well as information for efficient operation.

1.2 Safety instructions

DANGER
Insufficient safety measures can cause death, injuries and property damage!
The correct installation, connection, and configuration of the robot and the vision sensor as well as
securing the working area of the robot are the responsibility of the user or operator. Please follow the
safety instructions in the robot’s manual.

WARNING
Failure of the sensor can occur and, if the working area of the robot is not properly secured, it can
endanger persons or machines.
Dedicated safety devices must be considered at all times when the vision system is connected to a robot
or machine. Please also follow the safety instructions in the VISOR® user manual.

Attention
The user must have read and understood all of the instructions in this manual before handling the
VISOR® Robotic URCap.
Any use of the vision sensor in noncompliance with these warnings is inappropriate and may cause injury
or damage of property.


1.3 Intended use


The VISOR® Vision-Sensor is an optical sensor featuring a number of different evaluation methods (detectors) and is used
for the non-contact acquisition/identification of objects. The vision sensor is designed for industrial use only and is not
suitable for use outdoors. The VISOR® Robotic URCap provides functions for connecting and controlling VISOR® vision
sensors from the robot program.

This manual is intended to be used during programming and maintenance of a robot vision application involving SensoPart
VISOR® vision sensors. It is made for robot programmers, robot application developers and maintenance technicians. It
offers instructions for the implementation of the SensoPart VISOR® vision sensor with a Universal Robots UR3/UR5/UR10
robot and describes an example project for realizing a simple pick-and-place application with a robot guided by a vision
sensor. It describes how to set up the SensoPart VISOR® Robotic URCap together with a SensoPart “VISOR® Robotic
Advanced” or “VISOR® Allround Professional” vision sensor.

1.4 Prerequisites and supported Versions

VISOR® 1st generation (VISOR® Robotic Advanced, VISOR® Allround Professional), VISOR® firmware 1.22.11.1 or higher:

• URCap 1.2.0: CB-series PolyScope 3.5 (no e-series support)
• URCap 1.2.1: CB-series PolyScope 3.7, e-series PolyScope 5.1
• URCap 1.3.1: CB-series PolyScope 3.8 - 3.11, e-series PolyScope 5.2 - 5.5
• URCap 1.4.0: CB-series PolyScope 3.12 or higher, e-series PolyScope 5.6 or higher

VISOR® new generation (VISOR® Robotic Advanced / Professional, VISOR® Allround Professional), VISOR® firmware 2.0.0.2 or higher:

• URCap 1.3.1: CB-series PolyScope 3.8 - 3.11, e-series PolyScope 5.2 - 5.5
• URCap 1.4.0: CB-series PolyScope 3.12 or higher, e-series PolyScope 5.6 or higher

Please note
For the VISOR® Robotic URCap Version 1.4.0, the robot needs to run PolyScope firmware CB-series 3.12 or higher, or e-series 5.6 or higher, with a UR3, UR5, UR10 or UR16 robot arm.


Updating the URCap from earlier Versions

Please note
Robot programs / PolyScope scripts that were written using an earlier version of the VISOR® Robotic URCap can no longer be used. The robot program as well as the robot installation file need to be rewritten.
In many cases, some program lines that are not related to URCap commands (e.g. move commands or waypoints) can be copied from programs made with the older URCap to a new program, but there is no guarantee that programs written with the older version will work.

That means for the installation of the VISOR® Robotic URCap Version 1.4.0 on systems that had previous versions of this
URCap installed:

• Uninstall the previous version of the URCap
• Check the robot firmware and, if necessary, update it to PolyScope 3.12 / 5.6
• Create a new, blank “robot installation”
• Install the new URCap
• Create a new robot program


2 VISOR® Robotic URCap: Installation Node


For a step-by-step description of the installation process please follow the instructions described in the Quickstart manual.

Figure 1 shows the main screen of the VISOR® Robotic URCap installation node. It can be found in the installation screen of the UR PolyScope software. The installation node allows you to set the main parameters of the URCap that are needed by the various robot programs, and it provides convenience functions such as the live image and the jobset backup.

Figure 1: VISOR® Robotic URCap – installation node main screen

The URCap can be enabled and disabled by checking or unchecking the checkbox “Enable URCap”. For this setting to be taken into account after a restart of the robot, the user must save the robot installation (Figure 2): in the “Installation” tab, click on the “Load/Save” submenu and save the current installation (“Save” or “Save As…” button).


Figure 2: Saving the Robot Installation

2.1 Installation Node: Network


The Network tab provides the important functions of connecting to the VISOR®, changing the IP settings of the VISOR® and activating the auto-connect function.

Figure 3: Installation node: Network


2.1.1 Connection Status


The URCap Status LED shows the status of the URCap software. If it is not “green”, please restart the robot, check your installation settings or re-install the URCap.

The Device Connected LED shows the connection status. If a VISOR® with a valid, URCap-compatible jobset is connected to the robot, its status is “green”. Only in this case can the robot program be run.

Only if the connection status is “green” can the robot program be run.

If the connection status is “yellow”, the device is connected, but the active job running on the VISOR® is not valid (Ethernet connection not active).
Please check the active job in SensoConfig:

• In a valid job, the Ethernet connection is enabled in the output settings of the VISOR® software.
• Ports are set to IN:2006, OUT:2005.

If you are using the VISOR® Robotic URCap jobset template, these settings should be active by default.


If the device has been selected from the list but has not yet been connected, its network settings can be changed:

• Sensor Name
• IP-address
• Subnet-mask
• Settings defined by DHCP server (if available in the network)

Figure 4: Changing the network settings

2.1.2 Auto Connect


By selecting “Auto connect”, the URCap will try to connect to the VISOR® at start-up of the robot. When a “Default Program” is set to run at robot start-up, this function must be used.

Please note
After selecting the “Auto connect” function, the “robot installation” has to be saved (cf. chapter 2, Figure
2). This rule goes for any changes in the installation node of the VISOR® Robotic URCap.

Figure 5: Activating the automatic connection at start-up


When using this function, please make sure the sensor is powered on and in “ready state” within 45 s after starting up the robot.


2.2 Installation Node: Jobs


A job contains all the settings and parameters required to carry out a certain inspection task; it can be considered the recipe or product file of the vision sensor. The VISOR® can store up to 255 different jobs in a jobset. The jobs have to be created with the VISOR® software (SensoConfig) and are stored on the VISOR® hardware.

The installation node tab “Jobs” of the VISOR® Robotic URCap (Figure 6) provides information on the job that is currently active on the VISOR® and allows changing the currently active job. Furthermore, backups can be performed and the VISOR® Robotic job templates can be uploaded to the VISOR®.

The VISOR® Robotic URCap updates the list of available jobs each time the VISOR® is connected. If you have added new jobs in SensoConfig while the URCap was connected to the VISOR®, please press “Retrieve job list” before continuing to program the URCap. This way, the new jobs can also be selected in the program nodes.

Figure 6: Job management using the “Jobs” installation node.

2.2.1 Backing up and Restoring Jobsets from/to the VISOR®


The VISOR® Robotic URCap can be used for backing up VISOR® jobsets on the robot controller and for uploading backup jobsets to the VISOR®. This simplifies the replacement of cameras and offers more safety and flexibility. Furthermore, this function can be used to upload the VISOR® Robotic URCap QuickStart job template, which makes setting up new tasks even easier.


1. Enter the name for the backup jobset by clicking into the text field.

2. Enter the name for the jobset via the on-screen keyboard.
   Please always confirm your input by pressing “Submit”.

3. Press “Backup Jobset” to back up the current jobset from the VISOR®.

For uploading a jobset to the VISOR®, press the button “Upload jobset” and select the jobset from the list.


Figure 7: Uploading jobsets to the VISOR®

The backup jobsets are stored per VISOR® firmware version and hardware model (V20 Allround Professional, V20 Robotic etc.). If you cannot see a jobset you previously stored, please check the following points:

• Are you using the same VISOR® model (V10 / V20; monochrome / color; Robotics / Allround)?
• Are you using the same firmware version of the VISOR® as when you saved the jobset on the robot?

Please note that these backup jobsets are solely intended for making quick and simple backups on the robot controller. We still recommend using SensoConfig for storing the jobsets after creation. The SensoConfig jobset files can be easily converted to other firmware versions or product models.

2.2.2 Uploading the VISOR® QuickStart Template Jobsets


With the installation of the URCap, a QuickStart jobset template is installed on the robot controller for each VISOR® model:

• URCap_Job-Template_Contour_Cal_Plate_200mm:
This jobset template contains a job template with the settings described in the chapter “QuickStart”.

2.3 Installation Node: Calibration


The calibration between camera and robot coordinate systems is performed using the program node “VISOR® Calibrate”. The calibration parameters are stored on the VISOR® within the jobs, and each job can have a different calibration setting. This concept has the following advantages:

• An automatic (re-)calibration can be performed in the robot program, e.g. using the point pair list calibration method. The necessary robot program could, for example, be stored in a subroutine of the main robot program and be started based on an event sent by the machine control to the robot.
• When the camera is mounted to the robot arm, a different calibration is necessary for each image acquisition pose and working area. By having the calibration saved in the VISOR®'s job, such applications can be easily solved.

In the tab “Calibration” of the installation node (Figure 8), the calibration settings can be copied from the active job to one specific job or to all other jobs on the VISOR®. This way, the actual calibration only has to be performed once.


Figure 8: Installation node, tab "Calibration"

2.4 Installation Node: Live Image


The live image function of the installation node displays the live image from the VISOR® (Figure 9). The yellow box displays the search region of the detector. The green box and the cross represent the detected part and its center.

Figure 9: Live Image from the VISOR®

For displaying live images, the VISOR® has to be in “run mode”, i.e. you cannot work on it in SensoConfig at the same time. The image transmission is limited by the bandwidth of the connection and other factors; thus, small delays cannot be completely avoided.


3 VISOR® Robotic URCap: Program Node “VISOR® Calibrate”


The calibration of the VISOR® is necessary in order to transform the image coordinates (pixels, with origin in the upper left corner of the image) into robot coordinates (e.g. millimeters with respect to the robot coordinate system). When calibration is active, all position and distance data are calculated directly in the selected unit. Furthermore, the error induced by perspective and lens distortion is corrected, and all result coordinates are calculated and transmitted to the robot in robot coordinates. Calibration is a basic step necessary for all pick-and-place applications and needs to be performed for each set-up. Once the calibration has been performed, the calibration values are stored in the job file on the VISOR®. With the tab “Calibration” of the installation node (Figure 8), the calibration settings can be copied from the active job to one specific job or to all other jobs on the VISOR®. This way, the actual calibration only has to be performed once if the optical set-up does not change between jobs.

The VISOR® Robotic offers two calibration methods:

Point Pair List:

• The robot picks a part, places it at several positions inside the field of view and sends the world coordinates to the VISOR® for generating the correlation between camera and world coordinates.
• Applications: calibration of large fields of view, automatic (re-)calibration of the system.

Calibration Plate (Robotics):

• The user places the calibration plate into the field of view of the camera and moves the robot to the four fiducials.
• Applications: easy-to-use, general-purpose calibration method offering high resolution.

For details on the settings, please refer to chapter 8.1.5 of the VISOR® user manual. For getting started quickly, we recommend the easier-to-use calibration plate method.

3.1 Pre-Requisites: Necessary Settings of the Robot


The VISOR® URCap uses the tool center point (TCP) coordinates for estimating all (x, y) positions. Therefore, please make sure the TCP of the system has been correctly set up in the installation node (see Figure 10) or in the URCap of your gripper, respectively. Please refer to your gripper or robot manual for more information.

Figure 10: Set-up of the tool center point in the UR installation.


3.2 Calibration with the Calibration Plate


The calibration method “Calibration plate (Robotics)” is used to determine absolute positions in robot world coordinates (e.g., mm). The scaling in x and y is determined separately. The tilt of the sensor towards the field of view and the lens distortion are corrected. The transformation into the absolute coordinate system of the robot is made possible with the “Calibration plate (Robotics)” by teaching the camera the positions of the four fiducials on the calibration plate in robot coordinates.

SensoPart offers four different calibration plates with widths of 50 mm, 100 mm, 200 mm and 500 mm. They can be ordered from SensoPart as aluminum-composite plates.

Please note
A PDF file containing the calibration pattern can be found among the files that come with the VISOR® installation (“[VISOR® Installation Directory]\Documentation\Calibrationplates”).
When printing, please set the printer to “actual size”.

Figure 11 displays the process for the calibration with the calibration plate. The user places the calibration plate into the field of view of the camera and moves the robot to the four fiducials. Then the VISOR® takes an image of the calibration pattern in order to actually perform the calibration. Using the VISOR® Robotic URCap, the user is guided through the process step by step.

Figure 11: Calibration Plate Calibration Process

The step-by-step description for setting up the calibration process with the programming node “VISOR® Calibrate” and
the “Calibration Plate” can be found in chapter 3 of the quickstart manual.

The VISOR® program node “Calibrate with the Calibration Plate” follows the PolyScope programming principles and guides you through the process step by step (Figure 12).

The upper part of the routine defines the parameters and boundary conditions for the process. These nodes require your
input, action or decision [A].

• These nodes are marked “yellow” at the beginning.
• Follow the instructions for each node. The node becomes “green”, i.e. ready, when all necessary inputs and actions have been made.
• Step through the nodes by pressing “next” or selecting the node.

The lower part of the routine [C] contains the “Calibration Plate Process”. It will run automatically once all necessary inputs in part [A] have been made.

• Here you can integrate your own commands at any position, e.g. additional move commands or gripper controls.

• The process can be started once all nodes are marked green.

Figure 12: Calibration with the calibration plate

3.3 Calibration with the Point Pair List


The calibration with the point pair list allows the fully automatic (re-)calibration or validation of the job without any user interaction. The robot picks a calibration part and places it within the field of view at 9 different poses. These poses are sent to the vision sensor and used for the calibration from pixel to robot coordinates and for the correction of errors induced by perspective and lens distortion.

In order to run the process, the 9 calibration poses have to be defined, as well as a start/exit pose, the image acquisition pose, an intermediate pose that allows the robot to safely reach the calibration poses, and the object pose from where the reference object is taken. Figure 13 graphically displays these poses, which have to be defined by the user, and the robot movements of the point pair list calibration process. The robot program starts at the start/exit pose. From here, it moves to a pose where a reference object can be picked. This object is placed at the nine calibration poses that have been defined by the user. At each of these poses, the VISOR® takes an image and estimates the position of the object in image coordinates. This way, a list, the point pair list, is built up in the VISOR® software. This list contains pairs of coordinates (x, y)_world and (x, y)_pixel. By correlating these values, the VISOR® is calibrated.
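Expressed as a formula, this correlation can be viewed as fitting a transformation T from pixel to world coordinates over the nine recorded point pairs (a simplified sketch; the actual VISOR® algorithm additionally models perspective and lens distortion):

    T* = argmin_T  Σ_{i=1..9} || (x, y)_world,i − T((x, y)_pixel,i) ||²

The more evenly the nine poses cover the field of view, the better this fit is constrained.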


Figure 13: Point pair list calibration process

Figure 14 explains the structure of the calibration with the point pair list. The VISOR® program node “Calibrate with the Point Pair List” follows the PolyScope programming principles and guides you through the process step by step. The upper part of the routine defines the parameters and boundary conditions for the process. These nodes require your input, action or decision [A].

• These nodes are marked “yellow” at the beginning.
• Follow the instructions for each node. The node becomes “green”, i.e. ready, when all necessary inputs and actions have been made.
• Step through the nodes by pressing “next” or selecting the node.

The lower part of the routine [C] contains the “point pair list process”. It will run automatically once all necessary inputs in part [A] have been made.

• The gripper controls for picking and releasing the reference part need to be added here before running the process.
• Furthermore, you can integrate your own commands, e.g. additional move commands for defining the movement of the robot more precisely.
• The process can be started once all nodes are marked green.


Figure 14: Calibration with the Point Pair List


When the VISOR® is mounted to the robot arm, the image acquisition pose has to be the same pose as in the VISOR® Pick process. When using a static mount, the image acquisition pose has to be a pose that moves the robot arm out of sight of the camera.

1. The VISOR® URCap programming nodes can be inserted into the program by switching to the screen “Structure” and selecting the tab “URCaps”.
   For implementing a calibration routine, we insert a “VISOR® Calibrate” node into the program.

2. By switching back to the “Command” tab, we see the VISOR® Calibrate main node, providing basic information.
   By stepping to the next node, the calibration method can be chosen and the structure of the process will be defined.


Please note
Please make sure the calibration parameters of the jobset on the VISOR® have been configured correctly and according to the choices made in the URCap:

• Calibration method (calibration plate / point pair list)
• Calibration plate type (50 mm, 100 mm, 200 mm, 500 mm)
• z-offset
• Focal length

3. Select the calibration method “Point Pair List”.
   Now the calibration with the point pair list process is set up.

1. Please select the “calibration job”:
   • The calibration job is used to find the reference object.

2. Please select the “detection job”:
   • Only this job is being calibrated.

The calibration values can be copied from this job to other jobs by using the “Calibration” function in the installation node.


In the folder “Pose Setup”, the four initial robot poses for the calibration have to be defined.

3. Press “Next” in order to proceed to the next step.

4. Please define the “Start/Exit” pose:
   • The process starts and ends at this pose.
   • Move the robot to the intended pose and press “Save Pose”.

5. Please define the “Object” pose:
   • This is the pose where the reference object is being taken from.
   • Move the robot TCP to the intended pose and press “Save Pose”.

Please note
The robot moves in a direct linear movement from the start pose to this pose. If an approach pose above the object is necessary (e.g. because a bracket gripper is applied), you can add this waypoint in the point pair list process.

6. Please move the robot TCP to the image acquisition pose. This is the pose from where the images are being taken:
   • The VISOR® has to have an unobstructed view of the calibration target and working area.
   • The imaging parameters have to be set up.

The image acquisition pose has to be the same as the image acquisition pose used in the VISOR® Pick process. We recommend storing this pose as a feature variable in the installation of the robot.


7. Please move the robot TCP to the intermediate pose. It must be possible to reach all the calibration poses safely from this pose.

In the point pair list process, the z-position of the intermediate pose defines the z-position of the automatically calculated approach poses to the 9 calibration poses.

8. In the folder “Calibration Points”, the nine calibration poses have to be defined.

9. Press “Next” in order to proceed to the next step.

10. Please move the robot TCP to the calibration pose where the reference object will be placed.
   • Make sure the reference object is within the field of view of the VISOR®.

11. Repeat step 10 for all 9 calibration poses.
   • The calibration poses should be evenly distributed within the field of view of the VISOR®.
   • It must be possible to safely reach all poses.


12. Once all the necessary poses have been set up, the point pair list process runs automatically.
   • Within the process, additional move commands and waypoints can be added if necessary.

13. The gripper actions need to be set in order to pick and place the reference part. They are marked by comment nodes.

14. After a successful calibration, the precision of the calibration can be checked in the “Calibrate” node.

3.4 Calibration: Hints and Good Practices


The calibration only needs to be run once at set-up time of the program. Once the job is calibrated and the optical set-up
does not change, the calibration does not need to be re-run each time the program starts. Therefore, we propose two
strategies for calibration:

Dedicated calibration robot program:

• Make one robot program for the calibration and save it on the robot.
• If the camera needs to be re-calibrated, you can load the program and run it.


Suppressing the calibration node:

• Alternatively, put the calibration node at the beginning of the program.
• Suppress the calibration node after having run it once.

If you are using other jobs with the same optical set-up, the calibration can be copied from the calibrated job to all or some other jobs by using the function provided in the URCap's installation node.

Attention
The image acquisition pose has to be the same as the image acquisition pose used in the VISOR® Pick
process (especially if the camera is mounted to the robot arm).

We therefore recommend storing this pose either as a feature variable in the installation of the robot (Figure 15 a)) or as a waypoint in the program (Figure 15 b)).

Figure 15: Storing the important poses as features in the installation node.

3.4.1 Requirements for the Optical Set-Up and the Usage of the Calibration Plate
If you have problems detecting the calibration plate (calibration fails) or the deviation values are too large (typical values are < 2 px), please consult the VISOR® manual, chapter 8.1.5.4, on how to optimize the calibration using calibration plates. If you notice very large deviations for the fiducials, please check the boundary conditions listed below.

When using the calibration plate, please note the following boundary conditions:

• The calibration plate must be clean and flat.
• The plate must be illuminated homogeneously over the entire field of view and must not be overexposed. The bright regions should have a gray value of at least 100 and below 255. The contrast between bright and dark regions should be at least 100 gray values. That means the image must not be under- or overexposed.
• Calibration works correctly only if the focus and position of the sensor do not change in relation to the measurement plane.


• To perform a calibration, at least one search pattern must be found. For small calibration patterns, it may be necessary to use at least two search patterns.
• The calibration pattern should cover the entire field of view of the VISOR® vision sensor. For a successful, precise calibration, it is not necessary that the entire calibration plate is visible. That means the fiducials do not have to be visible in the image – only the calibration pattern!

For detailed information, please refer to the VISOR® user manual (chapter 8.1.5.4.)


4 VISOR® Robotic URCap: Program Node “VISOR® Pick”


The programming node VISOR® Pick allows the user to implement a standard picking process. The standard picking operation that the URCap implements consists of the detection of a known product and its picking by the UR robot.

Figure 16 shows the poses the VISOR® Pick Part process moves to:

• The process starts by moving the robot to the “Acquisition Pose”.
   o The robot sends a trigger signal to the VISOR® for detecting the object.
• If the object has been detected:
   o The robot moves to the “Approach Pose”: this pose is automatically calculated. It is above the object at the distance defined by the approach offset specified in the node “VISOR® Pick Method”.
   o From the approach pose, the robot moves down to the object pose. This position takes into account the “Pos X”, “Pos Y” and “Angle” values returned by the VISOR® sensor. The z-component of the pose is defined by the “gripping pose”.
   o The object will now be picked if a gripper command has been inserted in the program.
   o The robot moves back to the approach pose and the intermediate pose.
• If the object has not been detected, several options are available depending on the setting:
   o Pass (default): the program continues its execution at the next programming node.
   o Try forever: the product detection is triggered until the product is detected.
   o Try again until [value specified by the user] sec(s) then stop: the detection of the product is triggered for the duration specified by the user (in seconds). The program continues if the product is detected before the end of the specified duration. If the product is not detected at the end of the specified duration, the program stops.
   o Popup and halt: a popup appears with a message specified by the user and the program stops.

A simplified script-style sketch of this sequence follows after Figure 16.

Figure 16: VISOR® Pick Process
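To make the sequence above concrete, the following URScript-style sketch mirrors the described flow. The pose names acquisition_pose, approach_pose and object_pose are placeholder assumptions (the real node computes the approach and object poses internally from the sensor result), and visor_trig() is the trigger function mentioned in chapter 7. This is a minimal sketch, not the code generated by the node:

    # minimal sketch of the pick sequence (assumed pose names)
    movej(acquisition_pose)        # 1. move to the image acquisition pose
    visor_trig()                   # 2. trigger the VISOR detection
    if (isVisorDetectorSuccessful()):
      movel(approach_pose)         # 3. auto-calculated pose above the object
      movel(object_pose)           # 4. x/y/angle from the sensor, z from the gripping pose
      # insert your gripper "close" command here
      movel(approach_pose)         # 5. retract via the approach pose
    end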

Please note:

• The TCP of the robot must be precisely defined.
• This is a template, and some processes may need additional poses to smooth the different approaches and robot movements. The user can add any kind of nodes within the template to do so.
• For a loaded program, the selected job is retrieved by its name, not its index. This means that if the job list has changed between the moment the program was saved and the moment it is loaded, but it still includes a job with the same name, that job will be selected. Otherwise, the user must explicitly choose a new job.
• The gripping and releasing operations of the product have to be added manually by the user.
• These operations must be done in the same frame (coordinate system) as the one used for the calibration process.

The necessary settings of the VISOR® are described in chapter 4 of the quickstart manual.


4.1 VISOR® Pick: Program Node and Options


Figure 17 explains the structure of the VISOR® Pick node. The program node “VISOR® Pick” follows the PolyScope programming principles and guides you through the process step by step. The upper part of the routine defines the parameters and boundary conditions for the process. These nodes require your input, action or decision [A].

• These nodes are marked “yellow” at the beginning.
• Follow the instructions for each node. The node becomes “green”, i.e. ready, when all necessary inputs and actions have been made.
• Step through the nodes by pressing “next” or selecting the node.

The lower part of the routine [C] contains the pick process. It will run automatically once all necessary inputs in part [A] have been made.

• The process can be started once all nodes are marked green.
• The gripper controls for picking and releasing the part need to be added here before running the process.
• Furthermore, you can integrate your own commands, e.g. additional move commands for defining the movement of the robot more precisely.
• There are two branches, “Object found” and “Object not found”, which are executed depending on whether a part has been found by the camera. The branches can be filled with the respective actions.

Figure 17: Structure of the VISOR® Pick node

Step-by-step information on how to set up the VISOR® Pick process for a simple pick-and-place task can be found in chapter 3.3.4 of the quickstart manual.


4.2 VISOR® Pick Method Node / Behavior if no part has been found
When clicking on the VISOR® Pick Method child node, the user can specify several options:

• A dedicated dropdown indicates the job within which the picking process will be executed. See the standard job configuration.
• The “Choose frame” dropdown allows selecting the reference frame.
• Once the product is detected, the robot goes to the position returned by the VISOR® sensor. In order to smooth the approach, the robot first goes to an intermediate pose before reaching the product itself. This intermediate pose is located above the detected product at an offset. This Z offset can be specified by the user in mm.

The user can specify the behavior of the program if the object is not detected by clicking on the related behavior:

• By clicking on the “Change method” button, the current template is deleted and the user can then select between the picking processes. An additional popup appears in order to confirm this choice.


5 VISOR® Robotic URCap: Assignments, Program Node “VISOR® Terminal”


Thanks to global variables, assignments for accessing arbitrary result telegrams from the VISOR®, and the VISOR® Terminal node for sending Ethernet requests to the VISOR®, the VISOR® URCap allows the experienced user to make use of the full flexibility of the VISOR® for solving more advanced applications.

5.1 Variables, Assignments and Logs


When executed, all programming nodes update 7 global variables:

• visor_request_sending_success: indicates whether the URCap has correctly sent the request to the VISOR® vision sensor.
• visor_request_success: indicates whether the VISOR® vision sensor process requested has been successful.
• visor_response: response of the VISOR® vision sensor triggered by the sent request (port 2006 or port 1998).
• visor_output_data: output of the VISOR® vision sensor triggered by the sent request (port 2005). The output format is the one specified in the telegram tab of the VISOR® SensoConfig software (see the Standard configuration of a job). No output data is generated if the request sent to the VISOR® vision sensor failed.
• visor_detector_success: indicates whether the vision process dedicated to the detection of a product has been successful. This variable is linked to the Overall result flag in the telegram definition (see the Standard configuration of a job).
• visor_pose: specifies, in UR pose format, the 2D position of the detected object as well as the gripping angle. These values are linked to the Pos. X, Pos. Y and Angle values in the telegram definition (see the Standard configuration of a job). The returned pose has the following format (a worked example follows below):
  visor_pose := p[Pos. X / 1000000, Pos. Y / 1000000, 0, 0, 0, d2r(Angle / 1000)]
• visor_status_msg: indicates the status description of the communication with the VISOR® vision sensor following the sent request.

All poses that have been taught to the system are stored as variables and can be accessed using assignment nodes.
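As a worked example of the scaling (illustrative values, assuming the telegram transmits positions in µm and angles in millidegrees, consistent with the formula above): if the telegram contains Pos. X = 52340, Pos. Y = -10500 and Angle = 45500, the resulting variable is

    visor_pose = p[0.05234, -0.0105, 0, 0, 0, d2r(45.5)]

i.e. x = 52.34 mm, y = -10.5 mm and a gripping angle of 45.5°.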


Figure 18: Tab Variables displaying the global variables defined by the VISOR® Robotic URCap
Six of these variables can be accessed in a UR program thanks to the dedicated Assignment programming node and the available functions implemented by the URCap:

• isVisorRequestSuccessful(): access to the visor_request_success variable.
• getVisorResponse(): access to the visor_response variable.
• getVisorOutputData(): access to the visor_output_data variable.
• isVisorDetectorSuccessful(): access to the visor_detector_success variable.
• getVisorPose(): access to the visor_pose variable.
• getVisorStatusMsg(): access to the visor_status_msg variable.

Figure 19: Adding Assignments
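A minimal URScript sketch using these functions (the reaction shown, writing to the log or halting with a popup, is an illustrative assumption, not a prescribed pattern):

    # minimal sketch: react to the result of the last VISOR request
    if (isVisorDetectorSuccessful()):
      obj = getVisorPose()                   # p[x, y, 0, 0, 0, rz]
      textmsg("VISOR pose: ", obj)           # write the detected pose to the log
    else:
      popup(getVisorStatusMsg(), "VISOR", blocking=True)
    end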

If the user has specified additional data in the payload of the job (see the Standard configuration of a job), these data can be retrieved thanks to two dedicated functions implemented by the URCap. The use of these functions assumes that the user knows in which order the different payload variables are organized, as well as the types of these data.

The two functions are:

• getVisorStringOutput(index):
access to the payload data, as a string, at index index. The indices start at 0, and the function returns the “Overall result”, “Pos X”, “Pos Y” and “Angle” variables if the index is equal to 0, 1, 2 and 3 respectively.
• getVisorIntegerOutput(index):
access to the payload data, as an integer, at index index. The function returns an array of 2 integer values. The first value can be seen as a Boolean specifying whether the value conversion into an integer is valid. If the conversion is valid, the second value is the converted integer. The indices start at 0, and the function returns the “Overall result”, “Pos X”, “Pos Y” and “Angle” variables if the index is equal to 0, 1, 2 and 3 respectively.


Figure 20: Accessing the global variables using assignment functions
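For example, a first user-defined payload value appended after the four standard fields would be read at index 4 (the index and variable names are assumptions for illustration):

    # read an additional integer payload field (assumed to be at index 4)
    res = getVisorIntegerOutput(4)
    if (res[0] == 1):      # first element: is the integer conversion valid?
      my_value = res[1]    # second element: the converted integer
    end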

During the execution of a program using the VISOR® Robotic URCap, every command request to the VISOR® vision sensor and all the response data from it are logged in the Log tab of the UR controller.

Figure 21: Logs


5.2 VISOR® Terminal


The programming node VISOR® Terminal allows the user to send any specified request to the VISOR® vision sensor (Figure 22). When clicking on the VISOR® Terminal node, the user can specify the request he or she wants to send to the VISOR® vision sensor and test it by clicking on the “Test” button. The response and the output data received from the VISOR® vision sensor are displayed in the interface. The job taken into account when sending the specified request is the active one selected in the job management installation node.

Figure 22: VISOR® Terminal
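The same mechanism is also available at runtime via script. A hedged sketch (chapter 7 mentions a script function visor_terminal(); its exact signature is not documented in this manual, and the request string "TRG" is an assumed example of a VISOR® telegram, see the VISOR® communications manual for the actual telegram set):

    # assumption: send a raw request telegram and read the reply afterwards
    visor_terminal("TRG")
    reply = getVisorResponse()     # response telegram (port 2006)
    data = getVisorOutputData()    # output/result telegram (port 2005)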


6 SensoConfig: Choosing the Right Detector for Robotics Applications


The easiest way to get started with the VISOR® in a robot application is to use the detector “Contour” for locating the parts, as shown in the VISOR® UR QuickStart template. However, the VISOR® offers several methods for locating parts in the image and thus offers great flexibility for solving a great variety of applications. This chapter cannot replace the VISOR® user manual's description of how to set up and use the various detectors; it only gives a rough overview of which detectors can be used and for which applications.

Basically, the VISOR® can locate objects with a dedicated detector or with a position alignment.

Detectors:

• Detectors are the basic image processing functions of the VISOR®. In each job, up to 255 detectors can be executed, but the VISOR® Robotic URCap only accepts coordinates for one part per image.
• For robot applications, a detector that returns calibrated coordinates is necessary. The detectors returning calibrated coordinates are: Contour, Pattern matching, BLOB, Caliper.
• If no additional checks on the part are necessary, using a detector is the standard method for locating an object.

Figure 23: VISOR® detectors that return calibrated positions

Position Alignment:

• The position alignment locates the object in the image, and this position can be used by the robot.
• Additionally, it creates a new coordinate system (the so-called alignment frame):
   o The origin of the alignment frame is the position of the object.
   o The positions of all detectors related to the position alignment will be moved accordingly.
• Applications:
   o The contour position alignment can be used for locating the part and for checking whether this part can be safely picked.


   o It can also be used for performing further checks on the object to be picked, e.g. for sorting applications or for quality checks.
• Methods:
   o There are three methods for position alignment: edge, pattern matching and contour matching.
   o The contour alignment comes with a special “gripping space” function that checks whether the gripping positions on the object are free (see chapter 6.2.2).

The most frequently applied detectors and alignments are briefly described in the following sections of this chapter. For a detailed description of these detectors, please refer to the VISOR® manual:

• Position Alignment: Contour
• Detector: Contour
• Detector: BLOB

6.1 Detecting Objects Using Contour Detector


The easiest way to extract an object position is by using the contour detector or the contour alignment. Both methods use the same detection algorithm. The contour detector locates an object in the image by its contour (edges).

1. Select the tab “Detector”.

2. Press “New”.

3. Select the detector “Contour”.

4. The yellow box defines the search region. Objects within this region can be detected. The smaller this region is, the faster the detection will be.

5. The green / red box defines the region for teaching the object. Move it around the object to be taught to the VISOR®.


6. All boxes can be rotated by moving the little black triangle.

7. Adjust the angle range in which the detector locates the parts.
   Please note: here, the angle is defined within the image coordinate system (0° in x-direction).

8. If necessary, adjust the scale range (e.g. if you are looking at the object plane from an angle).
   Please note: for optimizing speed, both ranges should be selected as small as possible.

9. The robustness of the detection can be increased if only contours that are really relevant are taught. Using the “Edit contour” function, contours generated e.g. by reflections can be removed manually.
   For further information on optimizing the detection, please refer to the VISOR® manual.

10. The little lock symbol allows locking the contour taught to the detector, so it cannot be accidentally changed by moving the boxes.


11. Now the detector is ready.
   • The green box shows the position of a detected object.
   • The red box displays the position where the object has been taught.

The contour detector also offers the possibility to add a result offset (cf. chapter 6.2.1).

Figure 24 shows the telegram settings for transmitting position and angle to the robot. The features can be selected in the output setup on the payload tab. Header, trailer and separator have to be set this way in order to be compatible with the URCap (as described earlier).

Attention
When deleting a detector, the telegram settings will be deleted, too.

Figure 24: Telegram settings for transmitting position and angle to the robot

6.2 Detecting Objects Using Contour Alignment


Contour detector and contour alignment use the same detection algorithm. They locate an object in the image by its contour (edges). The contour alignment additionally defines a new coordinate system, allowing other detectors to be aligned to this object for further checks.

This function can be used for several purposes:

• If a part is to be gripped on its outer contours, a further detector can be used to check the available space around the part.
• In sorting applications, the part can be located with the contour alignment and the sorting criteria can be checked with the subsequent detectors (e.g. does the object have the right label, color mark etc.).


There can be only one contour position alignment per job. When the part moves between triggers, the contour alignment will move all detectors so that they stay at the configured position on the part. There can be up to 255 detectors per job.

Figure 25 shows how to use the contour alignment for detecting an object position. Just move the green box around the object contour to be detected. The yellow box defines the search region. An object is taken into account if its center is within the search region.

Figure 25: Setting up the contour alignment for detecting an object position

Figure 26 shows the telegram settings for transmitting position and angle to the URCap.


Figure 26: Telegram settings for transmitting position and angle to the robot

6.2.1 Grip Point Correction – Result Offset


By default, the position sent to the robot is defined by the center of the green box marking the detected object. The optional “Grip point correction” function can be used to modify the automatically determined center point and take a different grip position into account, e.g. a handle at the side of an object. Figure 27 displays this function (“result offset”). It allows the flexible definition of an offset relative to the detected position. The result values will be corrected accordingly.

Figure annotations: the origin of the coordinate system of the contour alignment lies at the center of the green region of interest; the resulting position sent to the robot can be freely defined when setting up the result offset correction.

Figure 27: Result offset for freely defining the object position relative to the detected position

6.2.2 Gripping Region


Robots grip objects, e.g. with a twin-jaw gripper, on the outer contour of the objects. Gripping with the robot may not be possible if the objects touch or overlap. The VISOR® gripping space function can be used to check whether the gripping positions on the object are available in the required size. The position of the first found object whose tracked detectors (gripping regions) are all OK (according to the logical links in the overall result) is output. The gripping space function is available for the contour alignment.

The contour alignment identifies those objects as candidates whose contour matches the taught-in contour.


These candidates are sorted, and the first one in the list is used. By default, they are sorted by the score value, i.e. the level of accordance between the taught-in contour and the contour of the candidates.

However, when using the gripping space function, further criteria can be applied, allowing only objects that can actually be picked to be returned. The sorting takes place according to the values of “Sorting criteria” and “Sorting order” set in the “Gripping space” tab (Figure 28).

Figure 28: Using the gripping space function


In this order, the candidates are checked to make sure that the tracked detectors assigned by the alignment (e.g. clearance checks) all comply. This happens under consideration of the logical links in the overall job result (Figure 29). In the “Digital output” tab of the “Output” setup, logical links can be used to evaluate the objects. For example, free spaces for different gripping positions can be defined here. The gripping positions X-X and Y-Y are possible for the object shown in the following figure. Of these gripping possibilities, only those that are necessary for one grip then need to be checked for “free”. The position data of the first object that meets all these criteria is output, and the search is terminated at this point.


Figure 29: Logical links for setting up the gripping space check.

6.3 Detecting Objects Using BLOB-Detector


The BLOB detector is a basic machine vision function for evaluating connected areas (objects) in an image. These single objects are distinguished by simple features such as area, width or height. Their position (e.g. center of gravity) and angle can be extracted and used for robot guidance. This detector is especially useful when the shape of the objects is not known a priori.

The set-up of this detector follows two steps. In step one, object and background are distinguished from each other. In step two, the features for distinguishing the BLOBs are selected.


BLOB Step 1: “Binarization”
Distinguishing between object and background

Figure 30: Distinguishing between object and background

BLOB Step 2: “Features”
Selecting the features to distinguish the BLOBs

Figure 31: Selecting BLOB features

Figure 32 shows the telegram settings for transmitting position and angle to the robot. The features can be selected in the payload tab. Header, trailer and separator have to be set this way in order to be compatible with the URCap.


Please note
The BLOB detector can be used to transmit the positions of more than one object, and it is in this setting (“No. of results = 0”) by default.
However, for communication with the URCap, only one result may be transmitted. Thus, “No. of results” has to be set to “1”.

Figure 32: Telegram settings for transmitting position and angle to the robot


7 Troubleshooting
Problem: VISOR® software: all detector results are “fail” / detectors cannot be set up.
Possible reason: If the calibration is active in the job settings of the VISOR® but the calibration is not valid, all detector results are automatically “fail”.
Solution: If you cannot set up your detectors, please check the calibration settings. The VISOR® Robotic template jobset comes with a job where the calibration is active and valid, i.e. green status (although of course the actual calibration needs to be performed using the robot). If you start the jobset from scratch, please set up the detectors first and select the calibration method as a last step before starting the sensor.

Problem: The VISOR® does not send any data to the robot when triggered by visor_trig() or visor_terminal().
Possible reason: The detector in the QuickStart jobset has been deleted and a new one added.
Solution: When using a different / new detector, the telegram settings need to be set up again.

Problem: The part is being moved slightly by the gripper (during picking or during camera calibration).
Possible reasons: The robot TCP is not properly set up; the calibration z-offset is not correctly entered; the detector is not correctly set up (center point / result offset not correctly defined).
Solution: Check
• the TCP settings (robot)
• the z-offset (VISOR® SensoConfig calibration parameters, see the VISOR® help/manual)
• the detector settings (VISOR® SensoConfig setup step Detector / tab “Result offset”): center point, result offset

Problem: The sensor cannot be found / difficulties with auto-connection, or the robot is physically connected to a sensor but it cannot be found.
Possible reasons: Electrical and network connection; network settings.
Solution:
• Is the VISOR® connected to power (green LED on its backside is on)?
• Are the Ethernet cables connected?
• Are the network settings of the robot (see robot manual) correct?
• Are the network settings of the VISOR® correct? Can it be found with the SensoFind software?
• Are both components part of the same IP address space?
• The gateway of the robot has to be set (in PolyScope, “Setup Robot” > “Network” tab): the field should not be left blank. If there is a gateway in the system, its IP address has to be entered. If not, the IP address of the robot should be written into the field.

Problem: The connection status is “yellow”.
Possible reason: No valid jobset is installed on the VISOR®; Ethernet is not enabled.
Solution: Install a valid URCap QuickStart template (which should contain the correct settings by default) or set up the interfaces correctly in SensoConfig > setup step Output > tab Interfaces:
• Ethernet enabled
• Ports set to IN:2006, OUT:2005

Problem: A jobset that was stored previously is not displayed.
Solution: Are you using the same
• VISOR® model (V10 / V20; monochrome / color; Robotics / Allround)?
• firmware version of the VISOR® as when you saved the jobset on the robot?

Problem: The robot moves to a wrong gripping position.
Possible reason: The result offset is not defined correctly.
Solution: In the pose setup, the offset only affects the Z direction and the angle. Please set a result offset for X and Y (see chapter 6.2.1).


Note
We reserve the right to modify, add or remove contents of this document at any time without prior notice.
Passing on and reproduction of this document, and use and disclosure of its content, are prohibited unless expressly permitted.

Germany
SensoPart Industriesensorik GmbH
79288 Gottenheim
Tel.: +49 7665 94769-0
info@sensopart.de
06814830 - 18.06.2020 - 01

France: SensoPart France SARL, 77420 Champs sur Marne, Tel.: +33 1 64 73 00 61, info@sensopart.fr
Great Britain: SensoPart UK Limited, Pera Business Park, Nottingham Road, Melton Mowbray, Leicestershire, LE13 0PB, Tel.: +44 1664 561539, uk@sensopart.com
USA: SensoPart Inc., Perrysburg OH 43551, Tel.: +1 866 282 7610, usa@sensopart.com
China: SensoPart (Shanghai) Co. Ltd., 201803 Shanghai, Tel.: +86 21 6901 7660, china@sensopart.cn
