Seismic coherence is a routine measure of seismic reflection similarity for interpreters seeking structural boundaries and discontinuities that may not be properly highlighted on the original amplitude volume. Interpreters generally wish to use the broadest-band seismic data available. However, because of thickness tuning effects, spectral components at specific frequencies can highlight features of certain thicknesses at a higher signal-to-noise ratio than others. Seismic stratigraphic features may be buried in the full-bandwidth data but can be “lit up” at certain spectral components. For the same reason, coherence attributes computed from spectral voice components often provide sharper images, with the “best” component being a function of the tuning thickness and the reflector alignment across faults. Although one can corender three coherence images using red-green-blue (RGB) blending, displaying the information contained in more than three volumes in a single image is difficult. We address this problem by summing the covariance matrices of the individual spectral components, resulting in a “multispectral” coherence algorithm. The multispectral coherence images provide better images of channel incisement and are less noisy than those computed from the full-bandwidth data. In addition, multispectral coherence provides a significant advantage over RGB-blended volumes: the information content of an unlimited number of spectral voices can be combined into a single volume, which is amenable to further processing, such as color corendering with other related attributes, for example petrophysical parameters plotted against a polychromatic color bar. We demonstrate the value of multispectral coherence by comparing it with RGB-blended volumes and with coherence computed from a spectrally balanced, full-bandwidth seismic amplitude volume from a megamerge survey acquired over the Red Fork Formation of the Anadarko Basin, Oklahoma.
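The covariance-summation step described above can be sketched in a few lines. The function below is a minimal single-analysis-window illustration, not the authors' implementation; the energy-ratio-style coherence measure (largest eigenvalue over total energy) and the array shapes are assumptions for the sketch.

```python
import numpy as np

def multispectral_coherence(voices):
    """Sum the covariance matrices of all spectral voices, then measure
    coherence as the fraction of total energy captured by the largest
    eigenvalue (an energy-ratio-style measure)."""
    n_traces = voices[0].shape[0]
    cov = np.zeros((n_traces, n_traces))
    for v in voices:               # v: (n_traces, n_samples) analysis window
        cov += v @ v.T             # covariance matrix of one spectral voice
    evals = np.linalg.eigvalsh(cov)  # ascending eigenvalues
    return evals[-1] / evals.sum()
```

For identical traces in every voice the measure approaches 1, while uncorrelated noise pulls it toward 1/n_traces, which is why discontinuities stand out as low values.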
Recent developments in seismic attributes and seismic facies classification techniques have greatly enhanced the ability of interpreters to delineate and characterize features that are not prominent in conventional 3D seismic amplitude volumes. The use of appropriate seismic attributes that quantify the characteristics of different geologic facies can accelerate and partially automate the interpretation process. Self-organizing maps (SOMs) are a popular seismic facies classification tool that extracts similar patterns embedded within multiple seismic attribute volumes. By preserving the distances of the input data space in the SOM latent space, the internal relations among data vectors are better represented on an SOM facies map, resulting in a more reliable classification. We demonstrate the effectiveness of the modified algorithm by applying it to a turbidite system in the Canterbury Basin, offshore New Zealand. By incorporating seismic attributes and distance-preserving SOM classification, we are able to observe architectural elements that are overlooked when using a conventional seismic amplitude volume for interpretation.
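As a concrete illustration of the classification machinery discussed above, the following is a minimal one-dimensional SOM trained on multiattribute samples. It is a generic textbook SOM, not the distance-preserving variant the abstract describes; the node count, learning-rate schedule, and neighborhood width are illustrative assumptions.

```python
import numpy as np

def train_som(data, n_nodes=16, n_iter=500, lr0=0.5, sigma0=4.0, seed=0):
    """Minimal 1D SOM: each multiattribute sample pulls its best-matching
    node (and the node's grid neighbors) toward it, so nearby nodes end
    up representing similar facies."""
    rng = np.random.default_rng(seed)
    nodes = rng.normal(size=(n_nodes, data.shape[1]))
    grid = np.arange(n_nodes)
    for it in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))  # best-matching unit
        frac = it / n_iter
        lr = lr0 * (1.0 - frac)                   # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5       # shrinking neighborhood
        h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
        nodes += lr * h[:, None] * (x - nodes)
    return nodes

def classify(data, nodes):
    """Assign each sample to its nearest SOM node (its facies label)."""
    d = ((data[:, None, :] - nodes[None]) ** 2).sum(-1)
    return d.argmin(axis=1)
```

After training on two well-separated attribute clusters, the node sets assigned to each cluster are disjoint, which is the basic behavior the facies-mapping workflow relies on.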
In general, we wish to interpret the most broadband data possible. However, broadband data do not always provide the best insight for seismic attribute analysis. Obviously, spectral bands contaminated by noise should be eliminated. In contrast, tuning gives rise to spectral bands with higher signal-to-noise ratios. To quantify geologic discontinuities at different scales, we combine spectral decomposition and coherence. Using spectral decomposition, the spectral amplitudes corresponding to geologic discontinuities at a given scale, as well as subtle features that would otherwise be buried within the broadband seismic response, can be extracted. We applied this workflow to a 3D land data volume acquired over the Tarim Basin, Northwest China, where karst forms the principal reservoirs. We found that channels are better illuminated around 18 Hz, whereas subtle discontinuities are better delineated around 25 Hz.
One of the key components of traditional seismic interpretation is to associate, or “label,” a specific seismic amplitude package of reflectors with an appropriate seismic or geologic facies. The objective of seismic clustering algorithms is to use a computer to accelerate this process, allowing one to generate interpreted facies for large 3D volumes. Determining which attributes best quantify a specific amplitude or morphology component seen by the human interpreter is critical to successful clustering. Unfortunately, many patterns, such as coherence images of salt domes, result in a salt-and-pepper classification. Application of 3D Kuwahara median filters smooths the interior attribute response and sharpens the contrast between neighboring facies, thereby preconditioning the attribute volumes for subsequent clustering. In our workflow, the interpreter manually painted [Formula: see text] target facies using traditional interpretation techniques, resulting in attribute training data for each facies. Candidate attributes were evaluated by crosscorrelating their histograms for each facies, with low correlation implying good facies discrimination; Kuwahara filtering significantly increased this discrimination. Multiattribute voxels for the [Formula: see text] interpreter-painted facies were projected onto a generative topographic mapping (GTM) manifold, resulting in [Formula: see text] probability density functions (PDFs). The Bhattacharyya distance from the PDF of each unlabeled voxel to each of the [Formula: see text] facies PDFs resulted in a probability volume for each user-defined facies. We demonstrate the effectiveness of this workflow on a large 3D seismic volume acquired offshore Louisiana, USA.
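The facies-probability step above rests on the Bhattacharyya distance between discrete PDFs, which is straightforward to compute. The sketch below assumes both PDFs are histograms over the same bins; it is the generic textbook formula, not the authors' code.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """D_B = -ln(sum_i sqrt(p_i * q_i)) for discrete PDFs p and q.
    D_B = 0 for identical PDFs and grows as their overlap shrinks."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                      # normalize to unit mass
    q = q / q.sum()
    bc = np.sqrt(p * q).sum()            # Bhattacharyya coefficient
    return -np.log(np.clip(bc, 1e-300, 1.0))
```

An unlabeled voxel would then be assigned high probability for the facies whose PDF minimizes this distance.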
Seismic coherence is commonly used to delineate structural and stratigraphic discontinuities. We generally use full-bandwidth seismic data to calculate coherence. However, some seismic stratigraphic features may be buried in the full-bandwidth data but can be highlighted by certain spectral components. Due to thin-bed tuning, discontinuities in a thicker stratigraphic feature may be tuned, and thus better delineated, at a lower frequency, whereas discontinuities in thinner units may be tuned, and thus better delineated, at a higher frequency. Additionally, whether due to seismic data quality or the underlying geology, certain spectral components exhibit higher quality than others, resulting in correspondingly higher quality coherence images. Multispectral coherence provides an effective tool to exploit these observations. We evaluate the performance of multispectral coherence using different spectral decomposition methods: the continuous wavelet transform (CWT), maximum entropy, the amplitude volume technique (AVT), and spectral probes. Applications to a 3D seismic data volume indicate that multispectral coherence images are superior to full-bandwidth coherence, providing better delineation of incised channels with less noise. From the CWT experiments, we find that exponentially spaced CWT components provide better coherence images than equally spaced components for the same computation cost. The multispectral coherence image computed using maximum entropy spectral voices further improves the resolution of thinner channels and small-scale features. The coherence from the AVT data set provides continuous images of thicker channel boundaries but poor images of the small-scale features inside the thicker channels. Finally, multispectral coherence computed using the nonlinear spectral probes exhibits more balanced images and reveals clear small-scale geologic features inside the thicker channel. However, because amplitudes are not preserved in the nonlinear spectral-probe decomposition, noise in the noisier shorter-period components receives equal weight when building the covariance matrix, resulting in increased noise in the generated multispectral coherence images.
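The finding that exponentially spaced CWT voices outperform equally spaced ones for the same cost comes down to how the center frequencies are chosen. The helper below is a hypothetical illustration of the two spacing schemes; the frequency range and voice count are arbitrary.

```python
import numpy as np

def cwt_center_frequencies(f_min, f_max, n_voices, spacing="exponential"):
    """Exponential spacing keeps a constant frequency *ratio* between
    adjacent voices (constant-Q, matching the octave behavior of wavelets);
    linear spacing keeps a constant *difference*."""
    if spacing == "exponential":
        return np.geomspace(f_min, f_max, n_voices)
    return np.linspace(f_min, f_max, n_voices)
```

Exponential spacing concentrates voices at low frequencies, where the wavelet bandwidth is narrow, instead of oversampling the broad high-frequency end.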
Seismic attenuation, generally related to the presence of hydrocarbon accumulations, fluid-saturated fractures, and rugosity, is extremely useful for reservoir characterization. The classic constant-attenuation estimation model, which focuses on intrinsic attenuation, detects the seismic energy loss caused by the presence of hydrocarbons, but it works poorly when spectral anomalies exist due to rugosity, fractures, thin layers, and so on. Instead of trying to adjust the constant-attenuation model to such phenomena, we evaluate a suite of seismic spectral attenuation attributes to quantify the apparent attenuation response. We applied these attributes to a conventional and an unconventional reservoir and found the attenuation attributes to be effective and robust for seismic interpretation. Specifically, the spectral bandwidth attribute correlated with production from a gas sand in the Anadarko Basin, whereas the high-frequency spectral slope attribute correlated with production in the Barnett Shale of the Fort Worth Basin.
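Of the attributes mentioned, the high-frequency spectral slope is simple to sketch: fit a line to the magnitude spectrum above the peak frequency, where attenuation-related decay lives. The function below is one illustrative reading of that idea, not the authors' definition; fitting in linear (rather than log) magnitude is an assumption of the sketch.

```python
import numpy as np

def high_frequency_spectral_slope(trace, dt):
    """Fit a line to the magnitude spectrum above the peak frequency;
    a steeper negative slope suggests stronger apparent attenuation."""
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), dt)
    f_peak = freqs[np.argmax(spec)]
    hi = freqs >= f_peak                 # the high-frequency flank
    slope, _ = np.polyfit(freqs[hi], spec[hi], 1)
    return slope
```

For any band-limited wavelet (a Ricker, for instance) the attribute is negative, and it becomes more negative as high frequencies are preferentially absorbed.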
Pattern recognition-based seismic facies analysis techniques are commonly used in modern quantitative seismic interpretation. However, interpreters often treat techniques such as artificial neural networks and self-organizing maps (SOMs) as a “black box” that somehow correlates a suite of attributes to a desired geomorphological or geomechanical facies. Even when the statistical correlations are good, the inability to explain them through principles of geology or physics casts suspicion on the results. The most common multiattribute facies analysis begins by correlating a suite of candidate attributes to a desired output, keeping those that correlate best for subsequent analysis. The analysis then takes place in attribute space rather than in physical space, removing spatial trends often observed by interpreters. We add a stratigraphic layering component to a SOM model that attempts to preserve the intersample relations along the vertical axis. Specifically, we use a mode decomposition algorithm to capture the sedimentary cycle pattern as an “attribute.” If we correlate this attribute to the training data, it favors SOM facies maps that follow stratigraphy. We apply this workflow to a Barnett Shale data set and find that the constrained SOM facies map shows layers that are easily overlooked on traditional unconstrained SOM facies maps.
Analyzing the time-frequency features of seismic traces plays an important role in seismic stratigraphy analysis and hydrocarbon detection. Popular time-frequency analysis methods include the short-time Fourier transform, the continuous wavelet transform, the S-transform, and matching pursuit (MP), among which MP is the most tolerant of the window/scale effect. However, current MP algorithms do not consider the interfering effects of seismic events on the estimation of the optimal wavelets in each decomposition iteration. Interfering reflection events may result in inaccurate estimation of the optimal wavelets throughout the decomposition procedure. We have developed a hybrid-basis MP workflow to minimize the effect of event interference on the estimation of the optimal wavelets. Our algorithm assumes that the wavelet features remain constant within a user-defined small time window. The algorithm begins by identifying the strongest reflection waveform. Next, we estimate the optimal wavelet and the corresponding reflectivity model for the selected waveform using a basis pursuit algorithm. Then, we subtract the waveform computed from the optimal wavelet and the estimated reflectivity model from the seismic trace. We repeat this procedure until the total energy of the residual falls below a user-defined value. We demonstrate the effectiveness of our algorithm by applying it first to a synthetic model and then to a real seismic data set.
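The iterative subtract-the-strongest-event loop at the heart of matching pursuit can be sketched with a plain Ricker dictionary. This is generic greedy MP under stated simplifications (a single wavelet family and circular shifts), not the hybrid-basis algorithm of the abstract; the frequency grid and stopping tolerance are illustrative.

```python
import numpy as np

def ricker(t, f):
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def matching_pursuit(trace, dt, freqs, n_iter=10, tol=1e-6):
    """Greedy MP: repeatedly find the unit-norm Ricker atom (frequency,
    shift) most correlated with the residual, subtract its projection,
    and record (frequency, shift, coefficient)."""
    n = len(trace)
    t = (np.arange(n) - n // 2) * dt
    atoms = [ricker(t, f) for f in freqs]
    atoms = [w / np.linalg.norm(w) for w in atoms]   # unit-norm dictionary
    residual = np.asarray(trace, dtype=float).copy()
    picks = []
    for _ in range(n_iter):
        best_c, best_atom, best_pick = 0.0, None, None
        for f, w in zip(freqs, atoms):
            for k in range(n):                       # circular shifts of the atom
                shifted = np.roll(w, k - n // 2)
                c = residual @ shifted               # projection coefficient
                if abs(c) > abs(best_c):
                    best_c, best_atom, best_pick = c, shifted, (f, k, c)
        if best_atom is None:
            break
        residual = residual - best_c * best_atom
        picks.append(best_pick)
        if np.linalg.norm(residual) < tol * np.linalg.norm(trace):
            break
    return picks, residual
```

For a trace that is exactly one scaled, shifted dictionary atom, a single iteration recovers the frequency and coefficient and drives the residual to numerical zero.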
Well-log correlation is a crucial step in constructing cross sections, estimating structures between wells, and building subsurface models. Manually correlating multiple logs can be highly subjective and labor intensive. We have developed a weighted incremental correlation method to efficiently correlate multiple well logs along a geologically optimal path. In this method, we first automatically compute an optimal path that starts with the longer logs and follows geologically continuous structures. Then, we use dynamic warping to sequentially correlate the logs along the path. To avoid error propagation along the path, we modify the dynamic warping algorithm to use all previously correlated logs as references when correlating the current log. During the sequential correlations, we compute geologic distances between the current log and all of the reference logs. Such distances are proportional to Euclidean distances, but they increase dramatically across discontinuous structures, such as faults and unconformities, that separate the current log from the reference logs. We also compute correlation confidences to provide quantitative quality control of the correlation results. We use the geologic distances and correlation confidences to weight the references in correlating the current log. With this weighted incremental correlation method, each log is optimally correlated with all logs that are geologically closer and ordered with higher priority in the path. Hundreds of well logs from the Teapot Dome survey demonstrate the efficiency and robustness of the method.
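The engine underneath this correlation method is standard dynamic time warping. The sketch below is the unweighted single-pair core under simple assumptions (squared misfit, unit steps); the paper's contribution, multireference weighting by geologic distance and confidence, is layered on top of this and is not shown.

```python
import numpy as np

def dtw_correlate(log_a, log_b):
    """Align the samples of two logs by minimizing the cumulative squared
    misfit, allowing stretch/squeeze (repeated indices) in either log."""
    na, nb = len(log_a), len(log_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = (log_a[i - 1] - log_b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal alignment path from the end to the origin
    path, i, j = [], na, nb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[na, nb], path[::-1]
```

Identical logs align along the diagonal at zero cost, and a log with a repeated (stretched) interval still aligns at zero cost, which is exactly the flexibility needed for thickness changes between wells.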
Characterization of seismic geologic structures, such as fluvial channels and faults, is significant for seismic reservoir prediction. The coherence algorithm is one of the most widely used techniques for describing discontinuous geologic structures. However, precise coherence attributes between adjacent seismic traces are difficult to compute due to the nonstationary and non-Gaussian nature of seismic data. To describe seismic geologic structures accurately, we define a high-order spectrum coherence (HOSC) attribute. First, we develop a time-frequency analysis method, the second-order synchrosqueezing wave packet transform, to compute constant-frequency seismic volumes with high time-frequency resolution. Then, we develop a coherence approach by combining a mutual information calculation with an eigenvalue-based coherence algorithm. To improve computational efficiency, we adopt information divergence in place of the eigenvalue calculation of C3-based algorithms. Applying our coherence algorithm to the constant-frequency seismic volumes yields the HOSC attribute. To test the validity of the proposed workflow, we evaluate the HOSC attribute on synthetic data. Applied to a 3D real seismic data set from eastern China, the HOSC attribute clearly and accurately characterizes geologic discontinuities and subtle features, such as fluvial channels and subtle faults.
Subtle variations in otherwise similar seismic data can be highlighted by specific spectral components. Our goal is to highlight repetitive sequence boundaries to help define the depositional environment, which in turn provides an interpretation framework. Variational mode decomposition (VMD) is a data-driven signal decomposition method that provides several useful features compared with commonly used time-frequency analysis methods. Rather than using predefined spectral bands, VMD adaptively decomposes a signal into an ensemble of band-limited intrinsic mode functions, each with its own center frequency. Because it is data adaptive, the modes can vary rapidly between neighboring traces. We address this shortcoming of previous work by constructing a laterally consistent VMD method that preserves lateral continuity, facilitating the extraction of subtle depositional patterns. We validate the accuracy of our method on a synthetic depositional cycle example, and we then apply it to identify seismic sequence stratigraphy boundaries in a survey acquired in the Dutch sector of the North Sea.
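For readers unfamiliar with VMD, a compact single-trace version conveys the idea: each mode is repeatedly Wiener-filtered around its current center frequency, and the center frequency is re-estimated as the spectral centroid of that mode. The sketch below follows the Dragomiretskiy–Zosso update equations in simplified form (no Lagrangian enforcement, fixed iteration count); the laterally consistent extension of the abstract is not shown, and the parameter values are illustrative.

```python
import numpy as np

def vmd(signal, n_modes=2, alpha=2000.0, n_iter=200):
    """Simplified VMD: alternate (1) Wiener-filtering each mode around its
    center frequency against the residual of the other modes and
    (2) updating the center frequency as the mode's spectral centroid."""
    n = len(signal)
    f_hat = np.fft.fft(signal)
    omega = np.abs(np.fft.fftfreq(n))            # normalized |frequency|
    u_hat = np.zeros((n_modes, n), dtype=complex)
    centers = np.linspace(0.05, 0.45, n_modes)   # initial center frequencies
    for _ in range(n_iter):
        for k in range(n_modes):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener filter: pass energy near this mode's center frequency
            u_hat[k] = (f_hat - others) / (
                1.0 + 2.0 * alpha * (omega - centers[k]) ** 2)
            power = np.abs(u_hat[k, : n // 2]) ** 2
            centers[k] = (omega[: n // 2] * power).sum() / (power.sum() + 1e-12)
    modes = np.real(np.fft.ifft(u_hat, axis=1))
    return modes, np.sort(centers)
```

On a two-tone test signal the recovered center frequencies converge to the two tone frequencies, which is the "band-limited modes, each with its own center frequency" behavior described above.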
Seismic data with enhanced resolution allow interpreters to effectively delineate and interpret the architectural components of stratigraphically thin geologic features. We used a recently developed time-frequency-domain deconvolution method, based on polynomial fitting of seismic wavelet magnitude spectra, to spectrally balance nonstationary seismic data. The deconvolution increased the spectral bandwidth without amplifying random noise. We compared our new spectral modeling algorithm with existing time-variant spectral-whitening and inverse [Formula: see text]-filtering algorithms using a 3D survey acquired offshore in the Bohai Gulf, China. We mapped the improvements spatially using a suite of 3D volumetric coherence, energy, curvature, and frequency attributes. The resulting images displayed improved lateral resolution of channel and fault edges with few, if any, artifacts associated with the amplification of random noise.
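To illustrate what spectral balancing of nonstationary data does, the toy function below whitens overlapping Hann-windowed segments by dividing each local spectrum by a smoothed copy of itself (a crude local wavelet estimate), then overlap-adds. This is a generic stand-in to show the effect; it is not the authors' polynomial-fitting deconvolution, and the window, hop, smoothing, and stabilization parameters are arbitrary.

```python
import numpy as np

def spectral_balance(trace, win=64, hop=32, eps=1e-3):
    """Toy time-variant spectral whitening: flatten each windowed magnitude
    spectrum toward its smoothed envelope while keeping phase, then
    overlap-add with a squared-window normalization."""
    n = len(trace)
    out = np.zeros(n)
    norm = np.zeros(n)
    w = np.hanning(win)
    kernel = np.ones(5) / 5.0                    # spectral smoothing kernel
    for start in range(0, n - win + 1, hop):
        seg = trace[start:start + win] * w
        spec = np.fft.rfft(seg)
        mag = np.abs(spec)
        smooth = np.convolve(mag, kernel, mode="same")   # crude wavelet spectrum
        whitened = spec / (smooth + eps * smooth.max() + 1e-12)
        out[start:start + win] += np.fft.irfft(whitened, win) * w
        norm[start:start + win] += w ** 2
    return out / np.maximum(norm, 1e-12)
```

On a trace dominated by a low frequency with a weak high-frequency component, the balanced output carries a much larger high-to-low spectral ratio, which is the bandwidth extension the attributes then benefit from.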
Seismic interpreters frequently use geometric attributes, such as coherence, dip, curvature, and aberrancy, to define geologic features, including faults, channels, and angular unconformities. Some commonly used coherence attributes, such as crosscorrelation and energy-ratio similarity, are sensitive only to changes in waveform shape, whereas the dip, curvature, and aberrancy attributes are based on changes in reflector dip. Another category of seismic attributes comprises those sensitive to amplitude values. Root-mean-square (rms) amplitude is one of the better-known amplitude-based attributes, whereas coherent energy, Sobel-filter similarity, normalized amplitude gradients, and amplitude curvature are among the lesser known. We compute these not-so-common amplitude-based attributes on the Penobscot seismic survey from the Nova Scotia continental shelf, off the east coast of Canada, to bring out their interpretive value. We analyze the attributes at the level of the top of the Wyandot Formation, which exhibits different geologic features, including a synthetic transfer zone with two primary faults and several secondary faults, polygonal faults associated with differential compaction, as well as features related to basement faulting. The amplitude-based seismic attributes define such features accurately. We take these applications further by describing a situation in which some geologic features display no bending of reflectors, only changes in amplitude. One such example is the Cretaceous Cree Sand channels present in the same 3D seismic survey. We compute amplitude curvature attributes and identify the channels, which are not visible on the structural curvature display. In both applications, we observe that appropriate corendering of these not-so-common amplitude-based attributes leads to convincing displays, which can be of immense aid in seismic interpretation and help define the different subsurface features with more clarity.
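Among the amplitude-based attributes above, rms amplitude is the simplest to write down: the root-mean-square of the trace in a centered sliding window. The sketch below uses edge padding and an 11-sample window, both illustrative choices rather than any survey's actual parameters.

```python
import numpy as np

def rms_amplitude(trace, win=11):
    """RMS amplitude in a centered sliding window of length `win` (odd),
    with edge replication so the output matches the input length."""
    pad = win // 2
    sq = np.pad(np.asarray(trace, dtype=float) ** 2, pad, mode="edge")
    kernel = np.ones(win) / win                  # moving-average filter
    return np.sqrt(np.convolve(sq, kernel, mode="valid"))
```

Because the attribute responds only to amplitude strength, it highlights features such as amplitude-defined channels even where the reflectors show no structural bending.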
Recently, seismic fault detection has been treated as an image segmentation problem solved with deep learning (DL) architectures, making the resulting fault attribute significant for seismic interpretation. Researchers have concentrated on applying cutting-edge DL architectures to the computation of seismic fault attributes. To explore the factors that may affect the accuracy of the fault attribute, we compare fault probabilities computed using DL architectures under different scenarios. The designed scenarios aim to highlight the leading factors that affect the accuracy and resolution of seismic image segmentation. The factors discussed include the dimension and size of the training data, training data preparation, ensemble learning, and batch size. The comparisons are applied to a marine seismic survey from New Zealand and a land seismic survey from China. The results demonstrate that properly preparing the training data is far more important than choosing a cutting-edge DL architecture when computing a seismic fault attribute. We also propose a practical workflow that includes real seismic data and the corresponding interpreted fault sticks in the training data for a specific seismic survey.
Infant language learners are faced with the difficult inductive problem of determining how new words map to novel or known objects in their environment. Bayesian inference models have been successful at using the sparse information available in natural child-directed speech to build candidate lexicons and infer speakers’ referential intentions. We begin by asking how a Bayesian model optimized for monolingual input generalizes to new monolingual or bilingual corpora and find that, especially in the case of the bilingual input, the model shows a significant decrease in performance. In the next experiment, we propose the ME Model, a modified Bayesian model, which approximates infants’ mutual exclusivity bias to support the differential demands of monolingual and bilingual learning situations. The extended model is assessed using the same corpora of real child-directed speech, showing that its performance is more robust against varying input and less dependent than the Intentional Model on optimization of its parsimony parameter. We argue that both monolingual and bilingual demands on word learning are important considerations for a computational model, as they can yield significantly different results than when only one such context is considered.
Background: High ethical sensitivity positively affects the quality of nursing care; nevertheless, Chinese nurses’ ethical sensitivity and the factors influencing it have not been described. Research objectives: The purpose of this study was to describe ethical sensitivity and explore the factors influencing it among Chinese registered nurses, to help nursing administrators improve nurses’ ethical sensitivity, build harmony between nurses and patients, and promote patients’ health. Research design: This was a descriptive, cross-sectional study. Participants and research context: We recruited 500 nurses from several departments in three tertiary hospitals. The Chinese Moral Sensitivity Questionnaire–Revised and the Jefferson Scale of Empathy–Health Professionals were used to assess the nurses’ ethical sensitivity and empathic ability, respectively. Fifteen sociodemographic variables were included in the questionnaires. Ethical considerations: Informed consent was obtained from the participants regarding participation and data storage and handling. The study was reviewed and approved by the Research Center of Medical Ethics and Professional Ethics of Guilin Medical University (approval no. 2016RWYB04), and the whole research process was conducted strictly according to ethical requirements. Results: The valid response rate was 84.40%. The total score on the Chinese Moral Sensitivity Questionnaire–Revised was 35.82 ± 8.17. The subscale scores for moral responsibility and strength and for sense of moral burden were 21.50 ± 4.91 and 14.33 ± 3.98, respectively. Significant differences in nurses’ ethical sensitivity were found by age group, gender, years of working, professional category, and quality of family communication. Regression analysis showed that the main factors influencing nurses’ ethical sensitivity were gender, years of working, quality of family communication, career satisfaction, and empathic ability. Discussion: Our findings suggest that the ethical sensitivity of Chinese nurses in tertiary hospitals in Guilin is at a medium level. Conclusion: Directors of nursing schools and hospitals in China should pay attention to nurses’ ethical sensitivity and intensify education and training to improve it. Further studies should focus on interventions aimed at improving Chinese nurses’ ethical sensitivity.
Seismic attenuation analysis is important for seismic processing and quantitative interpretation. Nevertheless, classic quality factor (Q) estimation methods make certain assumptions that may be invalid for a given geologic target and seismic volume. For this reason, seismic attenuation attribute analysis, which relaxes some of these theoretical assumptions, can serve as a practical alternative for characterizing apparent attenuation. Unfortunately, most of the published literature defines seismic attenuation attributes assuming a specific source wavelet, such as the Ricker wavelet, rather than wavelets exhibiting the relatively flat spectrum produced by modern data processing workflows. One of the most common processing steps is to spectrally balance the data, either explicitly in the frequency domain or implicitly through wavelet-shaping deconvolution. If poststack seismic data have gone through spectral balancing or whitening to improve resolution, the wavelet exhibits a flat spectrum rather than a Ricker or Gaussian shape. We address the influence of spectral balancing on seismic attenuation analysis. Our mathematical analysis shows that attenuation attributes remain effective for poststack seismic data after certain types of spectral balancing. More importantly, this analysis explains why seismic attenuation attributes work on real seismic data processed with common procedures. Synthetic and field data examples validate our conclusions.
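The claim that attenuation measures survive spectral balancing can be illustrated with a tiny experiment: start from a spike (a perfectly flat, "balanced" spectrum), apply constant-Q amplitude decay, and recover Q from the slope of the log spectral ratio. The functions below are the textbook constant-Q relations (amplitude only, dispersion ignored), not the paper's derivation; the names and parameters are illustrative.

```python
import numpy as np

def apply_q(trace, dt, q, traveltime):
    """Constant-Q amplitude decay: multiply the spectrum by
    exp(-pi * f * t / Q), ignoring phase dispersion for simplicity."""
    spec = np.fft.rfft(trace)
    f = np.fft.rfftfreq(len(trace), dt)
    return np.fft.irfft(spec * np.exp(-np.pi * f * traveltime / q), len(trace))

def spectral_ratio_q(reference, attenuated, dt, traveltime):
    """Estimate Q from ln|S_att / S_ref| = -(pi * t / Q) * f, i.e., from
    the slope of the log spectral ratio versus frequency."""
    s_ref = np.abs(np.fft.rfft(reference))
    s_att = np.abs(np.fft.rfft(attenuated))
    f = np.fft.rfftfreq(len(reference), dt)
    slope, _ = np.polyfit(f[1:], np.log(s_att[1:] / s_ref[1:]), 1)
    return -np.pi * traveltime / slope
```

Because the spike's spectrum is flat, the recovered Q depends only on the attenuation-induced decay, not on the wavelet shape, which is the intuition behind why these attributes still work after balancing.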