Proposal of Robotic Workplace with Industrial Robot ALMEGA AX-V6

Marek Vagaš

American Journal of Mechanical Engineering


Department of Robotics, Faculty of Mechanical Engineering, Technical University of Košice, Slovakia

Abstract

The project includes an overall description and design of a robotic workplace that solves the problem of separating non-orientable components by means of a 3D camera system. The robotic workplace consists of a conveyor belt, a Mintron camera and the ALMEGA AX-V6 industrial robot. The components are picked from the rubber belt by a suction cup used as the end-effector system.


1. Introduction

The development of science and technology leads to the solution of a whole variety of tasks connected with the need for image processing and the evaluation of the acquired pictures as a source of specific knowledge. Image processing is the processing of images using mathematical operations, applying any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or a video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image [1].

Most algorithms used for image processing follow a process that comprises several steps (a minimal code sketch of these steps is given after the list):

• Image filtering (removal of image defects, minimization of the influence of brightness, etc.)

• Image transformation (removal of the geometric distortion introduced by the lens, compensation for the impossibility of placing the camera exactly at the geometric centre of the scene and for inaccurate vertical alignment of the camera, etc.)

• Segmentation (most often by thresholding of continuous coloured areas)

• Connecting of clusters of coloured areas and determination of their centre of gravity (centre of the area) and orientation.
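
As an illustration of these steps, the following minimal sketch uses Python with OpenCV and NumPy; the file name, colour bounds and blob-size limit are illustrative assumptions, not values taken from the described workplace.

```python
# Minimal sketch of the listed steps with OpenCV and NumPy.
import cv2
import numpy as np

frame = cv2.imread("belt_frame.png")                 # assumed input image (BGR)
blurred = cv2.GaussianBlur(frame, (5, 5), 0)         # 1) filtering

# 2) the transformation step (lens undistortion) would go here if the
#    camera matrix and distortion coefficients were known

# 3) segmentation by thresholding a colour range (OpenCV stores channels as BGR)
lower = np.array([0, 0, 100], dtype=np.uint8)        # assumed lower B, G, R bounds
upper = np.array([80, 80, 255], dtype=np.uint8)      # assumed upper B, G, R bounds
mask = cv2.inRange(blurred, lower, upper)

# 4) connected clusters: centre of gravity and orientation of each blob
num_labels, labels = cv2.connectedComponents(mask)
for label in range(1, num_labels):                   # label 0 is the background
    blob = np.uint8(labels == label)
    m = cv2.moments(blob)
    if m["m00"] < 50:                                # skip tiny noise blobs
        continue
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # orientation of the principal axis from the central moments
    angle = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    print("object %d: centre=(%.1f, %.1f), angle=%.1f deg"
          % (label, cx, cy, np.degrees(angle)))
```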

Recognition can in general be explained as the classification of the examined objects and phenomena into classes. In the case of image information, what is recognized is usually an image from 2D pictures. It may also be a sequence of images or a single image that contains information about the subject [2]. One example is the analysis of a robot scene, which requires knowledge of the spatial relationships between the objects in the frame, such as:

• Measurement of dimensions, determination of number and area

• Detection of presence, determination of position

• Checking of shape and quality.

2. Phases of Image Processing

Image processing is an area of interest concerned with the acquisition of data, i.e. of visual images, and with mapping symbols to these data [3]. In the case of video input information, the symbols are known or unknown figures (shapes, dimensions and locations of components), also called image classes, generally represented by real-world objects.

The purpose of image processing and analysis is therefore the assignment of symbols to the objects in the image data, where the symbols have some interpretation [4]. In practice, the symbols represent classes (e.g. polyhedron, chair, water, darkness, noise, etc.).

The Mintron camera MTV-12V1-EX uses a Sony "EX-View" HAD ½” CCD chip ICX249AL. "EX-View" is a sensitivity-enhancement technology developed by Sony to improve the light sensitivity of its CCDs by a factor of two for visible light and a factor of four for near-infrared wavelengths. The P/N junction of each photodiode in the CCD matrix is specially fabricated to have a much better photon-to-electron conversion efficiency.

In addition, each photodiode (representing one pixel in an image) has a microscopic lens fabricated over it to better capture and focus light onto the active semiconductor junction [5]. Figure 2 shows a diagram of its spectral response as well as pictures of the PCB with the chip.

Figure 2. Spectral response of the "EX-View" CCD and the camera PCB with the ICX249AL chip

The assignment of an image of an object to a virtual class (i.e. a class established by training) is a classification according to some criteria [6]. In this case we speak about scene analysis, the aim of which is to search for phenomena and for the relationships between these phenomena. Image recognition is divided into several phases:

• Image preprocessing

• Pattern recognition

• Structural pattern recognition

• Recognition by using models.

During imaging, the colour alignment is turned off, as is the automatic brightness setting, because of the green colour of the scanned area [7]. The camera transmits the image to the computer, where it is saved in 24-bit RGB format. The camera is controlled through a set of command interface protocol commands via the StellaCam Control software, see Figure 3.

Figure 3. Controller software “StellaCam Control” for Mintron camera system

For the detection of colours in the image, thresholding was used, formulated as follows:

R_min ≤ R(x, y) ≤ R_max        (1)

&

G_min ≤ G(x, y) ≤ G_max        (2)

&

B_min ≤ B(x, y) ≤ B_max        (3)

C(x, y) = 1 if conditions (1)-(3) hold simultaneously, otherwise C(x, y) = 0        (4)

The expression C represents the colour being detected. The RGB values were usually chosen from the shapes of the objects, or from the centres of the object shapes, because the ranges representing the colours may overlap.

This type of thresholding is vulnerable especially to unevenly distributed image brightness. For a pick-and-place application, however, it is very advantageous because of its simplicity and the speed of processing [8].
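
Assuming the thresholding has the usual form of a per-channel interval test combined by logical AND, as the connectives between conditions (1)-(3) suggest, a NumPy sketch of the mask computation could look like this; all channel bounds are placeholders.

```python
# Sketch of the per-channel thresholding implied by (1)-(4), in NumPy.
# The bounds are placeholders; in the application they would be chosen
# from the RGB values of the object shapes.
import numpy as np

def colour_mask(rgb, r_rng, g_rng, b_rng):
    """Return C(x, y) = 1 where all three channel conditions hold."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    c = ((r_rng[0] <= r) & (r <= r_rng[1]) &        # condition (1)
         (g_rng[0] <= g) & (g <= g_rng[1]) &        # condition (2)
         (b_rng[0] <= b) & (b <= b_rng[1]))         # condition (3)
    return c.astype(np.uint8)                       # mask C as in (4)

# example usage on a random 24-bit RGB image
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mask = colour_mask(image, (150, 255), (0, 90), (0, 90))   # "red-ish" pixels
print("pixels of the searched colour:", int(mask.sum()))
```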

After the geometric centre of the object and its orientation (given by a second point in front of the centre) have been obtained, the geometric distortions are removed by a series of equations:

Pinch: fish eye removal (K)

(5)

Zoom: enlargement or reduction of an image (Zoom)

(6)

Angle: the image rotation around the frame’s centre (angle)

(7)

Shearing (PX, PY)

(8)

Shifting (Shift X, Shift Y)

(9)

It is necessary to translate the pixels so that the image centre has zero coordinates before the pixels are transformed.

(10)

After the transformation, it is necessary to perform the inverse operation with these equations:

(11)
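
Since the bodies of equations (5)-(11) are not reproduced in the extracted text, the Python sketch below only illustrates the usual textbook forms of the listed corrections, applied in centred coordinates as described above; the function and the parameter values in the example call are purely illustrative.

```python
# Sketch of the coordinate corrections labelled (5)-(11), using standard
# forms of these transforms; not the exact equations of the paper.
import numpy as np

def correct_point(x, y, width, height,
                  k=0.0, zoom=1.0, angle=0.0,
                  px=0.0, py=0.0, shift_x=0.0, shift_y=0.0):
    # (10) translate so that the image centre has zero coordinates
    xc, yc = x - width / 2.0, y - height / 2.0

    # (5) pinch / fish-eye correction: scale the radius by (1 + k * r^2)
    r2 = xc * xc + yc * yc
    xc, yc = xc * (1.0 + k * r2), yc * (1.0 + k * r2)

    # (6) zoom: uniform enlargement or reduction
    xc, yc = zoom * xc, zoom * yc

    # (7) rotation around the frame centre
    c, s = np.cos(angle), np.sin(angle)
    xc, yc = c * xc - s * yc, s * xc + c * yc

    # (8) shearing with coefficients PX, PY
    xc, yc = xc + px * yc, yc + py * xc

    # (9) shifting
    xc, yc = xc + shift_x, yc + shift_y

    # (11) translate back so that the origin is again the image corner
    return xc + width / 2.0, yc + height / 2.0

# example: correct the detected object centre in a 640 x 480 image
print(correct_point(400, 300, 640, 480, k=-1e-7, zoom=1.02, angle=np.radians(1.5)))
```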

3. Application at Robotized Workplace

The Mintron camera system is positioned above the workplace at a certain height so that the received image covers the required surface of the conveyor belt [9]. At the moment a component enters the field of view of the camera system, a series of images is taken to record the current position, shape and dimensions of the part [10]. This information is then passed to the control system, where it is evaluated and compared with the known structures and shapes classified in the database system, see Figure 4.

Figure 4. Recognition and assignment of X and Y coordinates for object
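
The paper does not specify how the comparison with the database is performed. One possible approach, sketched below with OpenCV 4 (the function classify_part and the template dictionary are hypothetical), compares the detected part's contour with stored template contours using Hu-moment shape matching; a part would then be accepted for picking only when a known template name is returned.

```python
# Hypothetical comparison of a detected part with stored templates,
# using OpenCV 4 Hu-moment shape matching.
import cv2

def classify_part(part_mask, template_masks, max_distance=0.1):
    """Return the name of the best-matching template, or None."""
    contours, _ = cv2.findContours(part_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    part = max(contours, key=cv2.contourArea)

    best_name, best_dist = None, max_distance
    for name, template_mask in template_masks.items():
        t_contours, _ = cv2.findContours(template_mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
        if not t_contours:
            continue
        template = max(t_contours, key=cv2.contourArea)
        dist = cv2.matchShapes(part, template, cv2.CONTOURS_MATCH_I1, 0.0)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```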

This sequence eventually generates data for the control computer about the changed coordinates, the location and the possibility of gripping the part. The information is immediately approved and accepted by the control system of the ALMEGA AX-V6 industrial robot and carried out by the suction cup system mounted directly on the robot end effector [11]. The part is then inserted into a prepared container pallet whose shape and dimensions are established in advance.
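
The mapping from image coordinates to robot coordinates is likewise not detailed in the paper. The sketch below assumes a simple planar calibration (scale, rotation and offset); every constant in it is a placeholder rather than a value from the real workplace.

```python
# Hypothetical conversion of recognised image coordinates (Figure 4) into
# robot base coordinates, assuming a planar scale-rotation-offset calibration.
import numpy as np

MM_PER_PIXEL = 0.55                     # assumed scale of the camera image
THETA = np.radians(90.0)                # assumed rotation between image and robot axes
ORIGIN_MM = np.array([350.0, -120.0])   # assumed belt origin in robot coordinates
PICK_Z_MM = 15.0                        # assumed picking height above the belt

def pixel_to_robot(u, v):
    """Map the pixel position (u, v) of an object centre to robot X, Y, Z."""
    c, s = np.cos(THETA), np.sin(THETA)
    xy = MM_PER_PIXEL * np.array([c * u - s * v, s * u + c * v]) + ORIGIN_MM
    return xy[0], xy[1], PICK_Z_MM

print(pixel_to_robot(412, 233))         # example pick point for a detected object
```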

The production equipment represents a machine or group of machines that supplies the components; these are produced at regular intervals and placed onto the conveyor belt [12].

The workplace is designed so that components can, for example, easily slip onto the conveyor belt. The conveyor belt is driven by an electric motor located directly on it [13]. Sufficient electric power must also be provided, and an appropriate belt speed must be chosen with respect to congestion and the limited recording speed of the camera system. An important part of the whole workplace is the ALMEGA AX-V6 industrial robot with some of its features, see Figure 5.

Figure 5. Robotized workplace with industrial robot ALMEGA AX-V6

The easiest way to connect the Mintron video camera to the RS232 port of a PC is to use the following cable. Only two wires plus the shielding of the cable need to be connected, as shown in the diagram [14]; the other pins do not need to be connected unless they are required.

Interface protocol: the RS232 parameters to be used are 9600 Bd, 8 data bits, no parity, 1 start bit, 1 stop bit, LSB first. The interface is unidirectional (only commands are sent to the camera); no information is sent from the camera back to the PC. Therefore only the TXD line is used (besides ground), as was explained in the previous section on how to build the cable.

List of proposed commands for use with the Mintron camera: connect the camera to a free COM port of the PC using the serial adaptor cable described before. The control protocol of the interface is very simple [15]. If the interface receives one of the characters listed below from the PC, it activates the camera button corresponding to the received character.

Thanks to the simple protocol, no special control software is needed, and other devices such as a PDA can also be used to control the camera. Furthermore, the solution is not tied to an operating system such as Microsoft Windows, because any program that can send serial characters to the camera is sufficient.
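
For example, with the pyserial library a single command character could be sent using the stated RS232 settings; the port name and the character 'x' below are placeholders, since the actual command set is only listed in Figure 6.

```python
# Sending one command character to the camera with pyserial, using the
# RS232 settings stated above (9600 Bd, 8 data bits, no parity, 1 stop bit).
import serial

with serial.Serial(port="COM3", baudrate=9600, bytesize=serial.EIGHTBITS,
                   parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                   timeout=1) as cam:
    cam.write(b"x")    # one ASCII character activates one camera button
    # the link is unidirectional, so nothing is read back from the camera
```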

Figure 6. Proposed shortcut commands for controlling the Mintron camera

Besides these commands (see Figure 6), an advanced connection can be used for the Mintron camera: the connector at the rear panel of the Mintron connects to the RS232 port of the PC, supplies 12 V DC power to the Mintron, and delivers the video output signal to a video switch that is controlled by a signal also routed via this connector.

4. Conclusion

There is no doubt that methods and systems of image processing will be further improved in the coming period. This suggests the need to develop various knowledge bases and expert systems that would be integrated with the user and thus help in finding correct and reliable solutions to the challenges in the field of image processing, which is becoming particularly important.

An essential prerequisite for building such systems is the further development of assessment methodologies for each class of images, and a closer linkage of these methodologies with the systematized, accumulated knowledge base of the discipline.

A further improvement can be reached by using automatic gain control, which could not be used in this application because of the green colour of the conveyor belt surface (other colours were overexposed). By using a suitable colour and additional algorithms with automatic parameter setting, we want to approach 0% image processing errors in this application.

Acknowledgements

This contribution is the result of the project implementation: ITMS 26220220182 “University scientific park TECHNICOM for innovating applications with knowledge technology support”.

This contribution is the result of the project implementation: KEGA 059TUKE-4-2014 Development of quality of life, creativity and motor skills for disabilities and older people with the support of robotic devices.

References

[1] M. Mody et al., "Image signal processing for front camera based automated driver assistance system," 2015 IEEE 5th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), Berlin, 2015, pp. 158-159.

[2] T. Iwane, "Light field camera and integral 3D display: 3D image reconstruction based on lightfield data," 2014 13th Workshop on Information Optics (WIO), Neuchatel, 2014, pp. 1-4.

[3] W. Yang and D. Liu, "A precise phase-preserving image formation algorithm for TOPSAR data processing," 2015 IEEE 5th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Singapore, 2015, pp. 428-430.

[4] K. Deguchi, "A direct interpretation of dynamic images with camera and object motions for vision guided robot control," International Journal of Computer Vision, vol. 37, no. 1, 2000, pp. 7-20.

[5] J. Hricko and Š. Havlík, "Design of compact compliant devices - mathematical models vs. experiments," American Journal of Mechanical Engineering, vol. 3, no. 6, 2015, pp. 201-206.

[6] T. Vince, D. Kováč and J. Molnár, "VMLab in the education," Sistemas y Tecnologías de Información: Actas de la 7ª Conferencia Ibérica de Sistemas y Tecnologías de Información, Madrid, 20-22 June 2012, pp. 334-338.

[7] C. H. Lampert and J. Peters, Journal of Real-Time Image Processing, vol. 7, 2012, p. 31.

[8] O. Ibrahim, H. El Gendy and A. M. ElShafee, "Speed detection camera system using image processing techniques on video streams," International Journal of Computer and Electrical Engineering, vol. 3, no. 6, December 2011.

[9] http://www.atmel.com/images/issue4_pg39_43_robotics.pdf.

[10] R. Belko, Vizuálne systémy priemyselných robotov (Visual systems of industrial robots), diploma thesis, 2012.

[11] http://robotiq.com/wp-content/uploads/2015/06/How-to-Choose-the-Right-End-Effector-F.pdf%3FsubmissionGuid%3De9cff220-d1e3-444c-9f69-e8d4ec716fff.

[12] I. S. Shin, S.-H. Nam, H. G. Yu, R. G. Roberts and S. B. Moon, "Conveyor visual tracking using robot vision," 2006 Florida Conference on Recent Advances in Robotics (FCRAR 2006), May 25-26, 2006.

[13] J. A. Akec, S. J. Steiner and F. Stenger, "An experimental visual feedback control system for tracking applications using a robotic manipulator," Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society (IECON '98), vol. 2, 1998, pp. 1125-1130.

[14] N. P. Papanikolopoulos, P. K. Khosla and T. Kanade, "Visual tracking of a moving target by a camera mounted on a robot: a combination of control and vision," IEEE Transactions on Robotics and Automation, vol. 9, 1993, pp. 14-35.

[15] M. Sitti, I. Bozma and A. Denker, "Visual tracking for moving multiple objects: an integration of vision and control," Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE '95), vol. 2, 1995, pp. 535-540.
 