EXPERIMENTAL STUDIES

The equipment used in the experiment consists of a personal computer, an ordinary CCD camera, a white light source, and a robot. The CCD camera is placed in front of the eyes, and the light illuminates the face from below. To simplify the image processing, a black background is used. With the algorithm described above, the eye window can be located quickly and compared against the eye templates; the whole process, from capturing an image with the CCD camera to completing the eye template comparison, takes about 1 to 2 seconds. The processed images are shown in Figures 6-1 and 6-2.

 

 

Fig. 6-1. The detected eye window when the eye is open.

 

Fig. 6-2. The detected eye window when the eye is closed.

 

Figure 6-1 shows the detected eye window when the eye is open. After binarization, the hair region clearly appears as a dark block, which the analysis scans with search checker a; the facial region appears as a bright block and is scanned with search checker b. Once the approximate locations of the eye windows are found, they are compared against the eye templates. Figure 6-2 shows the detected eye window when the eye is closed. With only these simple steps, the dark regions produced by the closed eyelids could be mistaken for eye windows, but the eye template comparison confirms that these regions do not contain the iris. After the approximate locations of the eye windows are found, the region is locked and the auto-zooming device is driven in for more detailed analysis (Figure 7).
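The search stage is described only at this level of detail, so the following Python sketch is an assumption about how the pipeline could be organised: binarise the image, collect candidate windows that contain a large dark block, and keep the candidate that best matches a stored open-eye template under normalised cross-correlation. The function names, the threshold, and the acceptance score are illustrative rather than taken from the original system, and the simple raster scan merely stands in for the two search checkers.

```python
import numpy as np

def binarize(gray, threshold=80):
    """Boolean mask of dark pixels; the threshold value is an assumption,
    since the paper gives no numeric parameters."""
    return gray < threshold

def candidate_windows(mask, win_h, win_w, step=4, min_dark_ratio=0.35):
    """Scan the binary image and keep windows containing a large dark block.
    This stands in for the two search checkers, which scan the dark (hair)
    and bright (face) blocks along different paths."""
    h, w = mask.shape
    hits = []
    for y in range(0, h - win_h, step):
        for x in range(0, w - win_w, step):
            if mask[y:y + win_h, x:x + win_w].mean() >= min_dark_ratio:
                hits.append((y, x))
    return hits

def template_score(gray, y, x, template):
    """Normalised cross-correlation between an image patch and the eye template."""
    th, tw = template.shape
    patch = gray[y:y + th, x:x + tw].astype(float)
    p = patch - patch.mean()
    t = template.astype(float) - template.mean()
    return float((p * t).sum() / (np.sqrt((p * p).sum() * (t * t).sum()) + 1e-9))

def locate_eye_window(gray, template, accept=0.6):
    """Return the best-matching candidate, or None when no candidate
    (for example a closed eyelid) resembles the eye template closely enough."""
    best, best_score = None, -1.0
    for y, x in candidate_windows(binarize(gray), *template.shape):
        score = template_score(gray, y, x, template)
        if score > best_score:
            best, best_score = (y, x), score
    return best if best_score >= accept else None
```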

Fig. 7. The locked region on the user's face.

 

Fig. 8. The relation between the gazing direction and the eyeballs.

 

We can use the geometric relation between the gazing direction and the positions of the eyeballs to estimate where the user is looking (Figure 8). Two ratios are introduced, L'/L and R'/R, where L and R are the widths of the left and right eyes and L' and R' are the distances from the centre of each eyeball to the corresponding eye corner. In our system, if R'/R = 0.3 ± 0.03 and L'/L = 0.4 ± 0.03, the system judges that the user is gazing at the CCD camera and is ready to accept blink commands; if the gazing direction is not toward the CCD camera, all blink movements are ignored.

After the computer has analysed the image, it knows whether the eyes are open or closed, and this outcome is used to control the robot. In one experiment we used eye movements to control the robot, with different signal codes corresponding to open and closed eyes; in another experiment the same system dialled a telephone through an RS-232 serial link and a modem. Each set of signal codes triggers a fixed action of the robot.

A counter records an open eye as the signal 1 and a closed eye as the signal 0. Each code consists of five bits: the first four are control bits and the fifth is an error-detection bit. After the first four bits are entered, the recognised values are shown on the monitor. If they match what the user intended, the user enters 1 as the error-detection bit and the command is executed; if the error-detection bit is 0, meaning the input or the recognition was incorrect, the entered bits are erased and the first four bits are entered again. This is repeated until the error-detection bit comes out 1 and the command is executed, so a higher error rate means more repetitions when encoding seven 4-bit numbers. Under ideal conditions a user can encode seven 4-bit numbers within one minute without any practice, but the encoding time increases under other conditions. Table 1 shows the time taken by different test subjects to input seven 4-bit numbers with this system.
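Only the acceptance windows R'/R = 0.3 ± 0.03 and L'/L = 0.4 ± 0.03 in the sketch below come from the description above; the variable names and the assumption that eye widths and centre-to-corner offsets are available in pixels from the eye-window stage are illustrative.

```python
# Acceptance windows from the system description; eye widths and
# centre-to-corner offsets are assumed to be measured in pixels.
R_RATIO, L_RATIO, TOLERANCE = 0.3, 0.4, 0.03

def gazing_at_camera(r_offset, r_width, l_offset, l_width):
    """True when both ratios fall inside R'/R = 0.3 +/- 0.03 and
    L'/L = 0.4 +/- 0.03, i.e. the user is judged to be looking at the
    CCD camera and blink commands may be accepted."""
    return (abs(r_offset / r_width - R_RATIO) <= TOLERANCE and
            abs(l_offset / l_width - L_RATIO) <= TOLERANCE)

# Example: offsets of 30 px and 40 px inside 100-px-wide eyes pass the test.
assert gazing_at_camera(30, 100, 40, 100)
```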
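The 4 + 1 coding scheme can likewise be written as a small control loop. In the sketch below, `read_blink`, `show`, and `execute` are assumed interfaces to the vision stage, the monitor, and the robot; only the structure of four control bits confirmed by one error-detection bit follows the description above.

```python
from typing import Callable, List

def read_code(read_blink: Callable[[], int],
              show: Callable[[List[int]], None],
              execute: Callable[[List[int]], None]) -> List[int]:
    """Collect one command with the 4 + 1 scheme: four control bits
    (open eye = 1, closed eye = 0) followed by one error-detection bit.
    A detection bit of 1 confirms the echoed control bits and the command
    is executed; a 0 discards them and the code is entered again."""
    while True:
        control = [read_blink() for _ in range(4)]  # four control bits
        show(control)                               # echo them on the monitor
        if read_blink() == 1:                       # error-detection bit
            execute(control)
            return control
        # detection bit 0: wrong input or wrong recognition, start over
```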

| Test subject | Without practice | After ten minutes of practice |
| --- | --- | --- |
| Black eyes, relatively large pupil | < 1.7 min | < 1.0 min |
| Blue eyes, relatively large pupil | 2.0 - 2.1 min | 1.2 - 1.4 min |
| Black eyes, relatively small pupil | 1.7 - 1.9 min | 1.0 - 1.2 min |
| Blue eyes, relatively small pupil | 2.1 - 2.3 min | 1.5 - 1.7 min |
| Very fatigued eyes | > 5.3 min | 5.1 - 5.2 min |
| Deep eyelid shadows | > 5.1 min | 4.1 - 4.3 min |

Table 1. Encoding time of different test subjects for inputting seven 4-bit numbers with this system.

 

In Xie's 1994 experiment [12], the search preprocessing relied on blurring and minimum morphological operations, and capturing the eye window took 3 to 4 seconds on a PC with a 33 MHz CPU. With the diagonal-box search, the eye window is captured in under 1 second on the same equipment, so the diagonal-box search does indeed improve the efficiency of the search. We also found in this experiment that different faces produce different dark and bright blocks after binarization, so the lighting and the camera aperture play an important role throughout the analysis. Because this variation differs from one person to another, we calibrate the lighting and the aperture before applying the diagonal-box search so that the result is as clean as possible. The distance between the face and the CCD camera also influences the analysis while images are being captured: when the distance is too small or too large, the apparent size of the iris changes, and a wrong iris size leads to incorrect results in the eye template comparison. Making the eye template flexible to such scale changes will be the goal of our next experiment.

 

In addition, when blinks are used to control the robot, the frequency of opening and closing the eyes must match the frequency at which images are captured. We therefore use audio signals to tell the user whether the input was correct and when to perform the next action: when the outcome is 1 the computer produces two sounds, and when the outcome is 0 it produces one sound. After hearing the sounds, the user can perform the next action at the proper time.
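As a concrete illustration of this feedback rule, the sketch below uses the terminal bell in place of the tone generator of the actual system; only the rule of two sounds for 1 and one sound for 0 comes from the text.

```python
import sys
import time

def beep():
    """One short audio cue; the terminal bell stands in for the real tone output."""
    sys.stdout.write("\a")
    sys.stdout.flush()

def announce(outcome: int, gap: float = 0.3) -> None:
    """Two sounds when the recognised signal is 1, one sound when it is 0,
    so the user knows the result and when to make the next blink."""
    beep()
    if outcome == 1:
        time.sleep(gap)
        beep()
```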

 
