Background/Aims: In exploring human factors, stereoscopic 3D images have been used to investigate the neural responses associated with excessive depth, texture complexity, and other factors. However, the cortical oscillations associated with the complexity of stereoscopic images have rarely been studied. Here, we demonstrate that the oscillatory responses to three differently shaped 3D images (circle, star, and bat) increase as the complexity of the image increases. Methods: We recorded simultaneous EEG/MEG for the three stimuli. Spatio-temporal and spatio-spectro-temporal features were investigated with a non-parametric permutation test. Results: N300 and alpha inhibition in the ventral area increased as the shape complexity of the stereoscopic image increased. Conclusion: The relative disparity in complex stereoscopic images may increase cognitive processing (N300) and cortical load (alpha inhibition) in the ventral area.

With increasing attention to 3D content, including 3D movies and TV, augmented reality, and head-mounted displays (HMD), research on stereoscopic displays has turned to neuroimaging to optimize 3D content, so that producers and manufacturers can guarantee a high quality of experience (QoE) and viewer safety [1,2]. According to the recent literature, excessive disparity (3D image depth), texture complexity, disparity gradient, and object movement are potential factors in visual fatigue or discomfort.

In 1838, Charles Wheatstone found that two images of the same visual scene, even with a small horizontal disparity, give rise to the perception of depth in the human visual system. Binocular fusion of the two disparate images in the visual cortex provides this depth perception; consequently, depth is not perceived for the set of points with zero disparity, called the horopter [3]. The disparity between the two images produces an angular, or retinal, disparity, which comes in two forms: convergent (negative or crossed) and divergent (positive or uncrossed). A viewer perceives an object with convergent disparity as lying in front of the screen, closer to the eyes; by contrast, an object with divergent disparity appears to lie behind the screen, farther from the eyes.

From a neuroscience perspective, 3D images are processed through binocular depth perception in the visual cortex. With the benefits of functional magnetic resonance imaging (fMRI), neuroscientists have pinpointed the role of neurons in the visual cortex and suggested a two-streams hypothesis for the neural processing associated with human vision: dorsal and ventral pathways [4,5,6,7,8]. The dorsal stream is referred to as the “where” stream; this pathway begins in the visual cortex in the occipital lobe and proceeds to the parietal lobe, where it is involved in motion and depth perception. The ventral stream is referred to as the “what” stream; this pathway, which begins in the visual cortex and proceeds to the medial temporal lobe, is related to object recognition and identification.

Research on stereoscopic 3D displays has focused on excessive depths of 3D content [9,10,11,12,13]; however, we found no studies on the complexity of the texture of 3D images. Results on the complexity of the texture of 2D stimuli in the ventral stream have been reported in several studies and are summarized as follows: 1) the duration of electroencephalogram (EEG) desynchronization was longer for complex 2D stimuli than for simple ones [14]; 2) the N350 component was modulated by visual complexity [15]; and 3) the gamma response was enhanced for complex visual stimuli [16,17].

In this study, we hypothesized that the cortical load in the ventral area may increase as the shape complexity of a stereoscopic image increases. Accordingly, we designed a single-trial experiment with three differently shaped stereoscopic images to test our hypothesis, collected simultaneous EEG and magnetoencephalogram (MEG) data from 10 healthy subjects, and explored the oscillatory correlates of stereoscopic shape complexity that may be applicable to its real-time assessment.

Experimental paradigm and materials

We recruited ten subjects to participate in this experiment. All were healthy, right-handed adults, including six males, with a mean age of 24.4 ± 2.99 years. We informed them of the purpose of our study, as well as the details of the experimental procedure, and all subjects signed a written informed consent; the Institutional Review Board of the Gwangju Institute of Science and Technology approved this study officially (No. 20150615-HR-18-02-01).

Each trial consisted of a 2-second fixation on a random dot stereogram (RDS) with zero disparity, 6 seconds of stimulation with one of three differently shaped 3D stereoscopic RDS images (800 × 450 pixel anaglyphs), and a 3-second rest period, as depicted in Fig. 1. Fifty trials were collected for each shape at a viewing distance of 1000 mm, with a pixel length of 0.4 mm on the screen and a pixel disparity of 8 pixels in the 3D images. For example, for a subject with a 65 mm pupil distance, an 8-pixel disparity corresponds to -0.18 degrees of retinal disparity (within Percival's comfort zone [1,2]); the object appears to float above the screen, with a distance of 47 mm between the floating object and the screen (vergence distance). An example of the 'Star'-shaped stereoscopic image is shown in Fig. 1B.
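As a sanity check on this geometry, the following sketch computes the vergence distance and retinal disparity from the stated viewing parameters, assuming a simple symmetric crossed-disparity model (the function and variable names are illustrative, not taken from the original analysis):

```python
import math

def crossed_disparity_geometry(disparity_px, pixel_mm, view_mm, ipd_mm):
    """Approximate vergence distance and retinal disparity for a crossed
    (convergent) on-screen disparity, assuming symmetric viewing geometry."""
    d = disparity_px * pixel_mm                 # on-screen disparity in mm
    obj_mm = view_mm * ipd_mm / (ipd_mm + d)    # apparent distance of the fused object
    pop_out = view_mm - obj_mm                  # how far the object floats in front of the screen
    # Angular (retinal) disparity: vergence angle at the object minus at the screen;
    # the negative sign denotes crossed disparity.
    angle = 2 * (math.atan(ipd_mm / (2 * obj_mm)) - math.atan(ipd_mm / (2 * view_mm)))
    return pop_out, -math.degrees(angle)

pop_out, ret_disp = crossed_disparity_geometry(8, 0.4, 1000, 65)
print(f"object floats ~{pop_out:.0f} mm in front of the screen")   # ~47 mm
print(f"retinal disparity ~{ret_disp:.2f} deg")                    # ~-0.18 deg
```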

Fig. 1

Experimental paradigm and materials. (A) One trial in the experimental paradigm. We selected three differently shaped stereoscopic objects (circle, star, and bat) of fixed depth. Each trial consisted of a 2-second interval of 2D fixation, a 6-second 3D stimulus presentation after onset, and a 3-second rest period. (B) An example of the 'Star'-shaped stereoscopic depth image, which can be viewed with anaglyph glasses. (C) Shape complexities for the three images. The values were estimated by applying 3 × 3 Laplacian filtering to the shape images.


To estimate shape complexity, we applied Laplacian filtering with a 3 × 3 filtering window, defined in formula (1).

Here, the raw image values are binary: pixels within the object area (the circle, star, or bat) are set to 1, and all other pixels to 0. In formula (1), y is the Laplacian-filtered value, X is the 3 × 3 matrix of raw image values in the filtering window, and X0,0 is its center pixel. We calculated y for each pixel in the shape image, took the absolute value of y at each pixel, and averaged over all pixels in the image. This average represents the length of edges per unit area (1 × 1 pixel²) and was defined as the shape complexity. The shape complexities of the circle, star, and bat images are shown in Fig. 1C.
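A minimal sketch of this shape-complexity measure is given below. The exact 3 × 3 Laplacian kernel is not reproduced in the text, so the 4-neighbor kernel used here is an assumption; a different kernel variant would rescale the values without changing the qualitative ordering of shapes like these.

```python
import numpy as np
from scipy.ndimage import convolve

def shape_complexity(binary_shape, kernel=None):
    """Average absolute Laplacian response per pixel of a binary shape image
    (1 inside the object, 0 elsewhere), as a proxy for edge length per unit area."""
    if kernel is None:
        # Assumed 3x3 Laplacian kernel; the paper does not state which
        # variant (4- or 8-neighbor) was used.
        kernel = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=float)
    y = convolve(binary_shape.astype(float), kernel, mode='constant', cval=0.0)
    return np.abs(y).mean()

# Toy example: a filled disc on an 800 x 450 canvas
yy, xx = np.mgrid[0:450, 0:800]
disc = ((xx - 400) ** 2 + (yy - 225) ** 2 <= 100 ** 2).astype(int)
print(f"complexity of the disc: {shape_complexity(disc):.2e}")
```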

Simultaneous EEG/MEG data recording

We collected simultaneous EEG/MEG data (Fig. 2) in a magnetically shielded room at the Korea Research Institute of Standards and Science (KRISS) in Daejeon, South Korea. For the EEG, 19 magnetically compatible electrodes (Fig. 2A) were attached to the scalp according to the international 10-20 system and recorded with a Biosemi amplifier at a 1024 Hz sampling rate with notch filtering at 60 Hz. The KRISS MEG consists of 152 axial gradiometer channels (Fig. 2B), also recorded at 1024 Hz with 60 Hz notch filtering. In addition, the electrooculogram (EOG) and electrocardiogram (ECG) were collected and used to remove EEG and MEG artifacts.

Fig. 2

EEG/MEG channel locations. (A) 19 EEG channel locations based on international 10-20 system. (B) 149 MEG channel locations (the 1st, 130th, and 131st channels were dropped because of poor conditions)


Preprocessing

After visual inspection, we rejected three bad channels (the 1st, 130th, and 131st) among the 152 MEG channels. EEG, MEG, EOG, and ECG data were band-pass filtered at 1-200 Hz, and all data (MEG, EEG) were down-sampled to 512 Hz. Artifacts from eye blinks, eyeball and muscle movements, and heartbeats were detected and removed by independent component analysis (ICA) [18]. Thereafter, bad trials exceeding ±150 μV in the EEG or ±500 fT in the MEG were identified and rejected automatically [19,20]. Laplacian spatial filtering was applied to the EEG data to increase the signal-to-noise ratio (SNR), and the MEG axial gradiometer data were converted to planar gradiometer data for our analysis.
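For readers who wish to reproduce a comparable pipeline, the sketch below outlines the same preprocessing steps in MNE-Python, assuming hypothetical file and channel names; the Laplacian spatial filter for the EEG and the axial-to-planar conversion for the MEG, which were part of the original analysis, are omitted here.

```python
import mne

# Hypothetical file name; the actual recordings used a Biosemi EEG amplifier
# and the 152-channel KRISS MEG.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)

raw.info["bads"] = ["MEG 001", "MEG 130", "MEG 131"]   # visually rejected channels (names illustrative)
raw.notch_filter(freqs=60)                             # line-noise notch
raw.filter(l_freq=1.0, h_freq=200.0)                   # 1-200 Hz band-pass
raw.resample(512)                                      # down-sample to 512 Hz

# ICA-based removal of ocular and cardiac components; in practice the
# components to exclude would be selected using the EOG/ECG channels.
ica = mne.preprocessing.ICA(n_components=30, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]          # indices of artifact components, chosen after inspection
ica.apply(raw)

# Epoching with automatic rejection of bad trials
events = mne.find_events(raw)                          # assumes a trigger channel is present
epochs = mne.Epochs(raw, events, tmin=-2.0, tmax=6.0,
                    reject=dict(eeg=150e-6, mag=500e-15),  # +/-150 uV EEG, +/-500 fT MEG
                    preload=True)
```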

Cluster-based permutation test

A cluster-based, non-parametric permutation test [21], implemented in the FieldTrip toolbox [22], was adopted for multiple-comparison testing of the EEG/MEG data. Adjacent spatio-temporal and spatio-spectro-temporal points were clustered according to their significance for the event-related potential/field (ERP/ERF) and event-related desynchronization/synchronization (ERD/ERS) analyses [23], respectively. For each cluster, a corrected p-value corresponding to the cluster-level statistic was estimated by Monte Carlo simulation. Assuming a Type I error of p < 0.05 for an individual spatio-temporal/spatio-spectro-temporal point, the cluster-based permutation test declares significance only when at least two contiguous original points reach the given level of significance, regardless of cluster shape. In this way, the multiple comparisons across individual spatio-temporal points were replaced by a single comparison using cluster-level statistics.

Cluster-based permutation tests were applied to the ERP/ERF and TF analyses of the EEG/MEG data to compare the shapes (circle vs. star and bat). For the ERP/ERF analysis, we used the dependent-samples t-statistic with p < 0.05. The procedure for the cluster-based permutation test for the ERP/ERF is depicted in Fig. 3 and proceeds as follows (a simplified code sketch is given after the numbered steps):

Fig. 3

Procedure for cluster-based permutation test in ERP/ERF (spatio-temporal data) analysis. First, t-tests were conducted for each channel and time point over subjects. Then, we obtained an uncorrected spatio-temporal t-value map. Second, we clustered the selected samples in connected sets based on spatio-temporal adjacency. Positive or negative t-values in a cluster were summed separately. Third, we permuted the ERP/ERF without condition; the condition was the circle, star, or bat image. After the permutation, we performed t-tests for each channel and time point. Fourth, we iterated the third procedure 1000 times to obtain 1000 t-value maps. Fifth, we used the largest of the cluster-level statistics for each of the 1000 t-value maps. Sixth, we constructed a histogram of the largest values and a probability density function based on the cluster-level statistics. Finally, we obtained a p-value that was approximated and corrected by this nonparametric permutation test from the probability density function.


0) We calculated the average ERP/ERF over the 50 trials, yielding one ERP/ERF per image for each subject. For example, when we compared the ERPs between the circle and bat conditions over subjects, a t-value was calculated for each spatio-temporal point, because the ERP/ERF data form a channel-by-time matrix.

1) For the cluster-based permutation test, we used all t-values with a p-value < 0.05. Here, the p-value was calculated by a parametric t-test and was not corrected statistically. In addition, we summed all the positive or negative t-values within the clusters separately. The summed values constituted the cluster-level statistics, for which we approximated the significance.

2) The selected t-values were clustered based on spatio-temporal adjacency. The minimum cluster size was set to two points. Neighboring channels were defined as those within 4 cm of each other [21]. We note that the ordering of channels makes no difference in our analysis, since neighboring channels are determined by their spatial adjacency.

3) We shuffled the condition labels across trials, divided the shuffled trials into two datasets, and then conducted a t-test between the two sets to obtain a t-value map.

4) We used a Monte Carlo simulation of 1000 iterations of step three to approximate the cluster-level p-value.

5) We took the largest of the cluster-level statistics for each permutation result and obtained 1000 values of the cluster-level statistics.

6) We constructed a histogram of the 1000 values of the cluster-level statistics, and a probability density function (PDF) was calculated to estimate the cluster-level p-values. The input for the PDF was the cluster-level statistics from the first step, while the output was a p-value for each cluster-level statistic. Thus, the cluster-level p-values were corrected and approximated by a cluster-based permutation test, because multiple comparisons were transformed into a single cluster-level comparison.
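The following simplified sketch illustrates the permutation logic for a single channel with temporal adjacency only; the actual analysis clustered across channels, time, and (for the TF data) frequency using FieldTrip, and the names below are illustrative.

```python
import numpy as np
from scipy import stats

def cluster_perm_test(cond_a, cond_b, alpha=0.05, n_perm=1000, seed=0):
    """Simplified cluster-based permutation test over time for one channel.
    cond_a, cond_b: paired (n_subjects, n_times) arrays. Clusters are formed
    from temporally adjacent supra-threshold points only."""
    rng = np.random.default_rng(seed)
    n_sub = cond_a.shape[0]
    t_crit = stats.t.ppf(1 - alpha / 2, df=n_sub - 1)   # two-sided threshold

    def max_cluster_stat(a, b):
        t = stats.ttest_rel(a, b).statistic             # dependent-samples t per time point
        best = 0.0
        for sign in (1, -1):                            # positive and negative clusters separately
            run = 0.0
            for above, tv in zip(sign * t > t_crit, t):
                run = run + tv if above else 0.0        # sum t over contiguous supra-threshold points
                best = max(best, abs(run))
        return best

    observed = max_cluster_stat(cond_a, cond_b)
    null = np.empty(n_perm)
    for i in range(n_perm):                             # randomly swap condition labels per subject
        flip = rng.integers(0, 2, n_sub).astype(bool)
        a, b = cond_a.copy(), cond_b.copy()
        a[flip], b[flip] = cond_b[flip], cond_a[flip]
        null[i] = max_cluster_stat(a, b)
    return observed, float((null >= observed).mean())   # largest-cluster statistic and its p-value

# Example: 10 subjects, 300 time points, a weak effect in samples 100-150
rng = np.random.default_rng(1)
circle = rng.standard_normal((10, 300))
bat = rng.standard_normal((10, 300))
bat[:, 100:150] -= 0.8
stat, p = cluster_perm_test(circle, bat)
print(f"max cluster statistic = {stat:.1f}, p = {p:.3f}")
```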

For the TF analysis of the EEG/MEG data, we calculated the power spectra for each channel under the following settings: multi-taper TF transformation based on multiplication in the frequency domain, a Hann taper, frequencies of interest of 1-200 Hz (200 bins), times of interest of -1000 to 3000 ms (81 points), and 7 cycles per frequency bin. Each channel yields a TF map, so the feature space is spatio-spectro-temporal, and the TF maps let us study the temporal behavior of frequency components across channels. The procedure for the cluster-based permutation test for the TF analysis was the same as that for the ERP/ERF; the only difference was the dimensionality, as each TF datum over channels was three-dimensional, as were the clusters detected.
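As an illustration, the sketch below computes an analogous spatio-spectro-temporal power decomposition with MNE-Python, substituting Morlet wavelets with 7 cycles per frequency for the Hann-taper frequency-domain convolution used in the original analysis; the synthetic array stands in for one subject's epochs.

```python
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 512.0
# Synthetic stand-in for one subject's epochs: (n_trials, n_channels, n_times),
# here 2 s of fixation plus 6 s of stimulation per trial.
epochs_data = np.random.randn(10, 19, int(8 * sfreq))

freqs = np.arange(1.0, 201.0, 1.0)     # 1-200 Hz in 1-Hz bins
power = tfr_array_morlet(epochs_data, sfreq=sfreq, freqs=freqs,
                         n_cycles=7.0, output='power', decim=50)
# power: (n_trials, n_channels, n_freqs, n_times); the trial average gives one
# spatio-spectro-temporal map per condition and subject.
tf_map = power.mean(axis=0)
```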

For better representation of the results, we used the cluster as a feature extraction filter in the ERP/ERF analysis, as follows:

q = Σs∈S Σt∈T wst Xst   (2)

where s and t indicate spatial and temporal indices, respectively; S and T are the sets of all channels and time points in this analysis, respectively; Xst is the average of the spatio-temporal data over trials for one subject; and wst is a spatio-temporal weight taking the value 0 or 1: wst is 1 for a spatio-temporal point (s,t) in the cluster, and 0 otherwise.

For the TF analysis, the frequency bin index f ∈ F is added, as follows:

q = Σs∈S Σf∈F Σt∈T wsft Xsft   (3)

where Xsft is the TF map averaged over trials for one subject. For topographies, the values in a cluster were summed over each channel, giving a vector v = [v1, v2, …, vi, …, vd]T for both formulas (2) and (3), where d is the number of channels. We then plotted this vector for topographical representation.
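A minimal sketch of this feature extraction is given below; the binary mask plays the role of wst (or wsft), the overall sum implements formulas (2) and (3), and the per-channel sums give the topography vector v.

```python
import numpy as np

def cluster_feature(x, cluster_mask):
    """Sum the subject-average data over the spatio-(spectro-)temporal points in a
    cluster (formulas (2) and (3)); x and cluster_mask share the same shape,
    e.g. (channels, times) or (channels, freqs, times)."""
    return float((x * cluster_mask).sum())

def cluster_topography(x, cluster_mask):
    """Per-channel sums of the in-cluster values, used for topographic plots
    (the vector v with one entry per channel)."""
    axes = tuple(range(1, x.ndim))
    return (x * cluster_mask).sum(axis=axes)

# Example with random (channels x times) data and a toy binary cluster mask
x = np.random.randn(19, 512)
mask = np.zeros_like(x)
mask[5:8, 100:150] = 1
q = cluster_feature(x, mask)          # scalar summary per subject and condition
v = cluster_topography(x, mask)       # length-19 vector for the topography
```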

Event-related potentials from EEG

The summed visual EEG-ERPs (q in Equation (2)) over all subjects in a cluster differed marginally (p = 0.08) between the circle and bat images; as the shape complexity of the stereoscopic image increased (from circle to bat), larger negative potentials were observed on average, as shown in Fig. 4A. The cluster comparing circle and bat covered the parietal, occipital, and temporal areas within 370-416 msec, as shown in Fig. 4B. The visual EEG-ERPs were located spatially in the occipital area (Fig. 4C), and differences between the circle and bat images (cyan-shaded interval in Fig. 4D) appeared at a latency of 370-420 ms after stimulus onset (temporal cluster).

Fig. 4

EEG event-related potentials (ERP) for the three 3D images. There was a cluster (p = 0.08) between the circle and bat images within 370-420 ms after stimulus onset: (A) Summed ERPs (quantified EEG) in the clusters for the three images; (B) Cluster location detected in spatio-temporal space for comparison of circle vs. bat. The cluster ranged from 371-416 msec. (C) Topographies of the three images showing the source patterns of ERPs at a time interval (370-420 ms), and (D) ERPs at Pz and O2 channels for the three images. The time interval (370-420 ms) is shaded in cyan.


Cortical oscillatory responses from MEG

There were significant oscillatory responses in the MEG data. The summed power spectra in a cluster (q in Equation (3)) over all subjects differed significantly (p = 0.01) between the circle and bat images, as shown in Fig. 5A. The significant cluster covered the parietal, right central, right temporal, and occipital areas (spatially, as shown in Fig. 5B), 8-25 Hz (the spectral alpha and beta bands), and 500-1100 msec (temporally). The summed power spectra decreased as the complexity of the stereoscopic image increased (Fig. 5A). The alpha and beta activities of the cluster were located spatially in the right parietal and occipital areas (Fig. 5C), similar to the EEG results, which exhibited negative potentials there. According to the TF maps (at the 119th MEG channel) for the three images, alpha and beta ERD [23] were prominent between 500 and 1100 msec after stimulus onset (Fig. 5D).

Fig. 5

MEG event-related desynchronizations (ERD) for the three 3D images: (A) Summed ERDs in the significant clusters for the three images; note that the circle and bat images differed significantly (p = 0.01); (B) Cluster location (red stars) detected in spatio-temporal space for comparison of circle vs. bat. The cluster ranged from 650-1199 msec. (C) Topographies for the three images showing the source patterns of ERDs within 500-1100 ms, and (D) ERD patterns on a time-frequency (TF) map at the 119th channel (marked with red arrow in (C)). Small squares in the TF maps represent the locations of clusters in the time-frequency domain.


Cognitive responses

The N350 at fronto-central sites is known to index an object-matching process and responds with higher amplitudes to images that cannot be identified in a straightforward manner [15,24,25]. However, we observed that, after 300 ms, the stereoscopic star and bat images (complex images) elicited larger negative potentials in the occipital lobe than did the circle image (simple image), as shown in Fig. 4B and 4C. The observed time window was similar to that of the N350, but the location was very different from the N350 component found in 2D visual complexity studies [15,24,25]. For stereoscopic 3D stimuli, Sahinoğlu [13] reported that the amplitude of the N300 component within 200-400 msec over the left and right occipital cortices increased with the depth of convergent disparities. The N300 in the occipital area is known to be related to disparate stimuli [26,27,28,29,30]. Our differently shaped stimuli also had convergent (negative) disparities; however, in contrast to Sahinoğlu's study, we varied the shape of the objects at a fixed convergent disparity rather than their depth. Our N300 may be related to disparate stimuli because complex stereoscopic images have more disparate points along the border between zero disparity and convergent disparity. The N300 in our results increased as the complexity of the object's shape increased (from circle to star to bat). Similar to Sahinoğlu's work, the N300 in the right hemisphere (O2 channel) in our results seemed larger (though not significantly) than that in the left (Fig. 4C). Thus, we suggest that both the depth level and the complexity of a stereoscopic object may modulate N300 amplitude in the occipital area.

Cortical oscillatory processing

Berlyne and McDonnell [14] reported that more complex visual stimuli produce EEG-ERD of longer duration over the anterior occipital area. More recently, Jensen and Mazaheri [31] reported that "alpha inhibition," or ERD of the alpha band, reflects functional activation of large-scale neuronal groups in the cortex. Here, we observed relatively longer durations of alpha inhibition for the bat image in the right occipital area (third topography in Fig. 5C); the alpha inhibition appeared not only in the occipital area, but also in the right central, right parietal, and right temporal areas, as shown in Fig. 5B. The alpha inhibition began at almost the same time after N300 for all three images; however, its duration was longer for the bat image, so that a significant cluster was detected at approximately 1000 msec, well after N300. The circle and star images exhibited ERS earlier than did the bat image.

The locations where alpha inhibition was detected (the right central, right parietal, right temporal, and occipital areas) relate to both the dorsal and ventral streams [4,5]. Following Parker [5], we expected that our stereoscopic stimuli might give rise to relative disparity in the two eyes and in the brain. Further, V2, V4, a collection of areas in the anterior inferior temporal cortex (TEs), the V5/medial temporal area (MT), and the medial superior temporal area (MST) are involved in processing relative disparity [5,6]. V2 lies in the early visual cortex; V4 and the TEs are in the ventral stream, while V5/MT and MST are in the dorsal stream. In particular, V4 is tuned for spatial frequency and object features of intermediate complexity [32,33], while V5/MT and MST are related to surface separation [5]. From this, we inferred that the difference in alpha activity between the circle and bat stereoscopic stimuli might stem from V4, V5/MT, and MST. Unfortunately, it was not possible to verify this inference by localizing (projecting) the difference in alpha activity onto the source space in the brain, as a realistic head model for each subject was unavailable. However, given the role of V4, we expect that the alpha inhibition stems from neural processing in the ventral area.

On the other hand, further investigation is required to determine why significant clusters were found only in the right hemisphere. Similarly, the N300 showed stronger amplitude in the right hemisphere than in the left. Hanslmayr et al. [34] reported a causal relationship between the alpha rhythm and ERPs. For stereoscopic stimuli, N300 and alpha inhibition may therefore also be related to each other, because the two activities were observed in the right visual area and the alpha inhibition followed the N300. We will address this issue in future studies.

In addition, we found no high gamma activity with either EEG or MEG, while depth electrode studies [16,17] have shown that complex stimuli induce strong modulations in the high gamma band. Deep sources are more difficult to detect in MEG than in EEG [35], and we expect that far more trials (approximately ten times more than in our experiment) may be required to detect deep sources [35]; however, it is difficult to conduct such studies, as the subjects would likely become exhausted during the experiment.

In summary, alpha inhibition increased as the shape complexity of the stereoscopic image increased, and we inferred that this inhibition might originate from neural processing in V4 in the ventral area. While the N300 component is a time-locked measure, alpha inhibition in the visual area is a spectral behavior independent of time onset; thus, we expect that it can be applied easily in real-time monitoring. Therefore, monitoring alpha inhibition in the V4 area may be used as a real-time indication of cortical load associated with the shape complexity of stereoscopic images.

Quantitative measurement for shape complexity

In this work, Laplacian filtering (3 × 3 window, as shown in formula (1)) was applied to estimate shape complexity, as shown in Fig. 1C. The complexity values of 'Circle', 'Star', and 'Bat' were estimated as 3.4 × 10⁻³, 5.4 × 10⁻³, and 7.8 × 10⁻³, respectively. The complexity differences for 'Star' vs. 'Bat' and 'Circle' vs. 'Star' were about 2 × 10⁻³, and the neural responses for these pairs did not differ significantly. However, the complexity difference between 'Circle' and 'Bat' was about 4 × 10⁻³, and a strongly significant difference was observed in the MEG (with a marginally significant difference in the EEG).

In addition to Laplacian filtering, various computational complexity metrics could be considered, such as the spatial spectral power of the shape image or counts of straight lines or angle changes along the shape boundary. However, it is not clear how well these metrics reflect actual human perception. Our main interest was to find a quantitative shape complexity metric that reflects what humans really perceive; to the best of our knowledge, no such perception-based metric has been established. As in this work, studying oscillatory brain responses to various shapes would be a good, though challenging, approach to seeking metrics correlated with real human perception.

EEG and MEG

Although we recorded EEG and MEG activity simultaneously, the EEG-ERP and MEG oscillatory results were obtained independently of each other. One possible reason is that MEG is more sensitive than EEG to the tangential components of a current source in a spherical volume conductor, whereas EEG detects both tangential and radial components [36]. Thus, scalp EEG can detect activity both in the sulci and at the top of the cortical gyri, whereas MEG is most sensitive to activity originating in the sulci. Therefore, we inferred that the EEG-ERP results may originate in the cortical gyri, while the oscillatory MEG results may originate in the sulci. We note that the ERF from MEG and ERD/ERS from EEG (not shown here) were also investigated but were not notably significant.

We collected simultaneous EEG/MEG data for 3D stereoscopic image stimuli with various complex shapes and then investigated the cortical responses to these images. Our hypothesis was that cortical load might increase as complexity of the stereoscopic image increases. In group analyses, we observed increased cognitive responses of N300 and alpha ERD in the ventral area as the shape complexity increased from the circle and star to the bat image. The N300 results differed from those of conventional studies on 2D shape complexity (N350). In addition, alpha inhibition in the ventral area may be a real-time indication of cortical oscillatory processing load in perception of shape complexity of the stereoscopic images. Our future work will explore the causal relationships between N300 and alpha inhibition, and perform real-time measures of cortical load in the ventral stream.

This work was supported by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) in the Culture Technology (CT) Research & Development Program 2016.

There are no conflicts of interest.

1.
Lambooij M, Fortuin M, Heynderickx I, IJsselsteijn W: Visual Discomfort and Visual Fatigue of Stereoscopic Displays: A Review. J Imaging Sci Technol 2009;53:30201-1-30201-14.
2.
Urvoy M, Barkowsky M, Callet PL: How visual fatigue and discomfort impact 3D-TV quality of experience: a comprehensive review of technological, psychophysical, and psychological factors. Ann Telecommun - Ann Télécommunications 2013;68:641-655.
3.
Davson H: Physiology of the Eye. Br Med J 1951;1:1433.
4.
Backus BT, Fleet DJ, Parker AJ, Heeger DJ: Human cortical activity correlates with stereoscopic depth perception. J Neurophysiol 2001;86:2054-2068.
5.
Parker AJ: Binocular depth perception and the cerebral cortex. Nat Rev Neurosci 2007;8:379-391.
6.
Roe AW, Parker AJ, Born RT, DeAngelis GC: Disparity channels in early vision. J Neurosci 2007;27:11820-11831.
7.
Hebart MN, Hesselmann G: What visual information is processed in the human dorsal stream? J Neurosci 2012;32:8107-8109.
8.
Ban H, Preston TJ, Meeson A, Welchman AE: The integration of motion and disparity cues to depth in dorsal visual cortex. Nat Neurosci 2012;15:636-643.
9.
Emoto M, Niida T, Okano F: Repeated vergence adaptation causes the decline of visual functions in watching stereoscopic television. J Disp Technol 2005;1:328-340.
10.
Yang C-Y, Hsieh J-C, Chang Y: An MEG study into the visual perception of apparent motion in depth. Neurosci Lett 2006;403:40-45.
11.
Kim D, Jung YJ, Kim E, Ro YM, Park H: Human brain response to visual fatigue caused by stereoscopic depth perception; in: 2011 17th International Conference on Digital Signal Processing (DSP). IEEE, 2011, pp 1-5.
12.
Cho H, Kang M-K, Yoon K-J, Jun SC: Feasibility study for visual discomfort assessment on stereo images using EEG; in: 2012 International Conference on 3D Imaging (IC3D). 2012, pp 1-6.
13.
Sahinoğlu B: Depth-related visually evoked potentials by dynamic random-dot stereograms in humans: negative correlation between the peaks elicited by convergent and divergent disparities. Eur J Appl Physiol 2004;91:689-697.
14.
Berlyne DE, McDonnell P: Effects of stimulus complexity and incongruity on duration of EEG desynchronization. Electroencephalogr Clin Neurophysiol 1965;18:156-161.
15.
Martinovic J, Gruber T, Müller MM: Coding of Visual Object Features and Feature Conjunctions in the Human Brain. PLoS ONE 2008;3:e3781.
16.
Oya H, Kawasaki H, Howard MA, Adolphs R: Electrophysiological Responses in the Human Amygdala Discriminate Emotion Categories of Complex Visual Stimuli. J Neurosci 2002;22:9502-9512.
17.
Lachaux J-P, George N, Tallon-Baudry C, Martinerie J, Hugueville L, Minotti L, Kahane P, Renault B: The many faces of the gamma band response to complex visual stimuli. NeuroImage 2005;25:491-501.
18.
Jung T-P, Makeig S, Humphries C, Lee T-W, McKEOWN MJ, Iragui V, Sejnowski TJ: Removing electroencephalographic artifacts by blind source separation. Psychophysiology 2000;37:163-178.
19.
Daly I, Pichiorri F, Faller J, Kaiser V, Kreilinger A, Scherer R, Müller-Putz G: What does clean EEG look like?; in: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2012, pp 3963-3966.
20.
Muthukumaraswamy S: High-frequency brain activity and muscle artifacts in MEG/EEG: a review and recommendations. Front Hum Neurosci 2013;7:138.
21.
Maris E, Oostenveld R: Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods 2007;164:177-190.
22.
Oostenveld R, Fries P, Maris E, Schoffelen J-M: FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data. Comput Intell Neurosci 2011;2011:156869.
23.
Pfurtscheller G, Lopes da Silva FH: Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol 1999;110:1842-1857.
24.
Schendan HE, Kutas M: Time Course of Processes and Representations Supporting Visual Object Identification and Memory. J Cogn Neurosci 2003;15:111-135.
25.
Schendan HE, Kutas M: Neurophysiological Evidence for the Time Course of Activation of Global Shape, Part, and Local Contour Representations during Visual Object Categorization and Memory. J Cogn Neurosci 2007;19:734-749.
26.
Chao GM, Odom JV, Karr D: Dynamic stereoacuity: a comparison of electrophysiological and psychophysical responses in normal and stereoblind observers. Doc Ophthalmol Adv Ophthalmol 1988;70:45-58.
27.
Fenelon B, Neill RA, White CT: Evoked potentials to dynamic random dot stereograms in upper, center and lower fields. Doc Ophthalmol Adv Ophthalmol 1986;63:151-156.
28.
Janssen P, Vogels R, Orban GA: Assessment of stereopsis in rhesus monkeys using visual evoked potentials. Doc Ophthalmol Adv Ophthalmol 1998-1999;95:247-255.
29.
Neill RA, Fenelon B: Scalp response topography to dynamic random dot stereograms. Electroencephalogr Clin Neurophysiol 1988;69:209-217.
30.
Regan D, Beverley KI: Electrophysiological evidence for existence of neurones sensitive to direction of depth movement. Nature 1973;246:504-506.
31.
Jensen O, Mazaheri A: Shaping functional architecture by oscillatory alpha activity: gating by inhibition. Front Hum Neurosci 2010;4:186.
32.
Pasupathy A, Connor CE: Population coding of shape in area V4. Nat Neurosci 2002;5:1332-1338.
33.
Umeda K, Tanabe S, Fujita I: Representation of stereoscopic depth based on relative disparity in macaque area V4. J Neurophysiol 2007;98:241-252.
34.
Hanslmayr S, Klimesch W, Sauseng P, Gruber W, Doppelmayr M, Freunberger R, Pecherstorfer T, Birbaumer N: Alpha phase reset contributes to the generation of ERPs. Cereb Cortex 2007;17:1-8.
35.
Attal Y, Bhattacharjee M, Yelnik J, Cottereau B, Lefèvre J, Okada Y, Bardinet E, Chupin M, Baillet S: Modeling and detecting deep brain activity with MEG & EEG. Conf Proc IEEE Eng Med Biol Soc 2007;2007:4937-4940.
36.
Cohen D, Cuffin BN: Demonstration of useful differences between magnetoencephalogram and electroencephalogram. Electroencephalogr Clin Neurophysiol 1983;56:38-51.