Artificial intelligence (AI), currently a cutting-edge field, has the potential to improve the quality of human life. AI and biological research are becoming increasingly intertwined, and methods for extracting and applying the information stored in living organisms are constantly being refined. As the field of AI matures with more trained algorithms, its potential applications in epidemiology, the study of host-pathogen interactions, and drug design widen. AI is now being applied in several areas, including drug discovery, customized medicine, gene editing, radiography, image processing, and medication management. More precise diagnosis and cost-effective treatment will become possible in the near future through the application of AI-based technologies. In agriculture, advanced AI-based approaches have helped farmers reduce waste, increase output, and shorten the time it takes to bring goods to market. Moreover, machine learning (ML) and deep-learning-based smart programs can be used to modify the metabolic pathways of living systems to obtain the best possible outputs with minimal inputs. Such efforts can improve industrial strains of microbial species to maximize yield in bio-based industrial setups. This article summarizes the potential of AI and its applications in several fields of biology, such as medicine, agriculture, and bio-based industry.
The use of AI may make it simpler to identify potential targets for genetic manipulation in big genome data and to design effective synthetic promoters in efforts to improve agronomic traits in plants [106,107]. The growing need for smart agriculture has driven substantial advancements in AI-based agricultural forecasting and prediction, which has improved crop productivity to a great extent [93]. A similar attempt was made in a recent study where image datasets were analyzed using AI algorithms, namely artificial neural network (ANN)- and genetic algorithm (GA)-based platforms, to predict crop yield in an optimized manner [108]. During the training period, the model obtained a maximum validation accuracy of 98.19%, whereas a maximum accuracy of 97.75% was achieved during the test period [108]. This model worked effectively under resource restrictions and with limited data, producing optimal results [108]. In another significant study, a new methodology for predicting agricultural yield in greenhouse crops employing recurrent neural network (RNN) and temporal convolutional network (TCN) algorithms was proposed [109]. Based on previous environmental and production data, this approach can be utilized to estimate greenhouse crop yields more accurately than its standard ML and deep learning peers [109].
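To make the genetic-algorithm idea concrete, the minimal sketch below evolves a small population of input settings toward the maximum of a toy yield function. The fitness function and parameter ranges here are hypothetical illustrations, not taken from [108]; a real system would score candidates with a trained yield model instead.

```python
import random

random.seed(0)

# Hypothetical yield model: peaks when irrigation = 0.6 and fertilizer = 0.4
def yield_score(irrigation, fertilizer):
    return 1.0 - (irrigation - 0.6) ** 2 - (fertilizer - 0.4) ** 2

def evolve(pop_size=30, generations=40, mutation=0.05):
    # Each individual is an (irrigation, fertilizer) pair in [0, 1]
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: yield_score(*ind), reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover (averaging)
            child = tuple(min(1.0, max(0.0, g + random.gauss(0, mutation)))
                          for g in child)                    # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: yield_score(*ind))

best = evolve()
print(best)  # converges near the optimum (0.6, 0.4)
```

The same selection-crossover-mutation loop applies whether the fitness function is a toy formula, as here, or an ANN trained on field images and sensor data.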
Image Processing By Dhananjay K
Before joining RTI, Dr. Pandit served as Technical Director with AECOM India, Director with DHI India, and as a scientist at various levels in the Indian Space Research Organisation (ISRO). While working with ISRO, Dr. Pandit gained extensive experience in digital image processing of satellite remote sensing data, application of Geographic Information System (GIS) and satellite remote sensing to irrigation and water resources projects, and the development of DSS for water resources applications. His expertise includes the development and implementation of FF&EWS, ROS, flood modeling, river basin modeling, DSS for IWRM, Irrigation Information Systems, and remote sensing and GIS applications in water resources.
Image analysis for DRG sections. Three mice per genotype were imaged on an Olympus FV3000 confocal microscope at 20× original magnification. One image was acquired of each mouse DRG section, and 3 sections were imaged per mouse (total: 9 images). The raw image files were brightened and contrasted equally in Olympus CellSens software and then analyzed manually, 1 cell at a time, for expression of Calca, P2rx3, and F2rl1. Cell diameters were measured using the polyline tool. NF200 signal (not shown), Calca, P2rx3, and F2rl1 were used to quantify the total neuronal population. Representative images of hind paw skin are shown from F2rl1floxPirt+/+ and F2rl1floxPirtCre mice, with a negative control from an F2rl1floxPirt+/+ animal imaged at the same settings.
With the technological advancements of the modern era, the easy availability of image editing tools has dramatically reduced the cost and expertise needed to produce and spread persuasive visual tampering. Through widely used online platforms such as Facebook, Twitter, and Instagram, manipulated images are distributed worldwide, and users of these platforms may be unaware of the existence and spread of forged images. Such images have a significant impact on society and can mislead decision-making in areas such as health care, sports, and crime investigation. In addition, altered images can be used to propagate misinformation that interferes with democratic processes (e.g., elections and government legislation) and crisis situations (e.g., pandemics and natural disasters). There is therefore a pressing need for effective methods to detect and identify forgeries. Various techniques are currently employed for this purpose. Traditional techniques depend on handcrafted or shallow-learning features. In such techniques, selecting features from images can be challenging, as the researcher has to decide which features are important and which are not; moreover, if the number of features to be extracted is large, feature extraction becomes time-consuming and tedious. Deep learning networks have recently shown remarkable performance in extracting complicated statistical characteristics from large inputs, efficiently learning the underlying hierarchical representations. However, deep learning networks for handling these forgeries are expensive in terms of parameter count, storage, and computational cost. This research work presents Mask R-CNN with MobileNet, a lightweight model, to detect and identify copy move and image splicing forgeries.
We have performed a comparative analysis of the proposed work against ResNet-101 on seven different standard datasets. Our lightweight model outperforms ResNet-101 on the COVERAGE and MICC F2000 datasets for copy move forgery and on the COLUMBIA dataset for image splicing. This research work also provides a forged percentage score for a region in an image.
Some research works have used various machine learning algorithms for forgery detection. Conventional machine learning (ML) algorithms such as logistic regression, SVM, and K-means clustering consider every pixel of the image as an individual dimension, thereby formulating image classification as a geometry problem [18]. Images are converted into high-dimensional vectors, and classification boundaries are learned by these algorithms. Unfortunately, such algorithms are often unable to learn very complex boundaries, leading to poor performance in image classification. Some machine learning algorithms that use distance metrics, such as K-nearest neighbours and K-means clustering [19], are computationally expensive because they must compute distances in high-dimensional vector spaces.
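The cost of distance-based classifiers can be made concrete: a k-nearest-neighbour classifier must compute a distance over every pixel dimension of every training image at query time, O(n · d) per query for n training images of d pixels. The following minimal pure-Python sketch uses tiny 4-"pixel" images with made-up values for illustration only:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training vectors.
    Each distance costs O(d) in the pixel dimension d, and all n training
    vectors must be scanned, so every query costs O(n * d)."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy dataset: 4-"pixel" images, two classes (values are illustrative)
train = [
    ([0.1, 0.2, 0.1, 0.0], "authentic"),
    ([0.0, 0.1, 0.2, 0.1], "authentic"),
    ([0.9, 0.8, 0.9, 1.0], "forged"),
    ([1.0, 0.9, 0.8, 0.9], "forged"),
]
print(knn_predict(train, [0.95, 0.85, 0.9, 0.95]))  # forged
```

For a real 256×256 RGB image, d is already 196,608, which is why pixel-space distance methods scale poorly compared with learned low-dimensional features.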
The significant contributions of this research work are as follows:
(i) Development of a DL architecture for the detection and identification of copy move and image splicing forgeries.
(ii) Detection and identification of copy move and image splicing forgeries using Mask R-CNN with MobileNet V1, a lightweight and computationally less expensive network.
(iii) Evaluation of Mask R-CNN with MobileNet V1 on seven different datasets: COVERAGE [33], CASIA 1.0 [34], CASIA 2.0 [34], COLUMBIA [35], MICC F220 [36], MICC F600 [36], and MICC F2000 [36].
(iv) Comparative analysis of the proposed work with ResNet-101 on different standard datasets.
(v) Estimation of the percentage score for a region of a forged image using Mask R-CNN and MobileNet V1.
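Regarding the last contribution, a forged percentage score can be understood as the fraction of pixels that the predicted segmentation mask marks as tampered. The sketch below illustrates only that final scoring step on a hand-made binary mask; in the actual work the mask comes from the Mask R-CNN output, not from hard-coded values:

```python
def forged_percentage(mask):
    """Percentage of pixels flagged as forged in a binary mask (list of rows,
    1 = predicted forged, 0 = predicted authentic)."""
    total = sum(len(row) for row in mask)
    flagged = sum(sum(row) for row in mask)
    return 100.0 * flagged / total

# 4x4 toy mask with a 2x2 forged region in the centre
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(forged_percentage(mask))  # 25.0
```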
The research work in [37] uses a CNN for detecting copy move and image splicing forgeries. To extract features from patches, the CNN is pretrained on labeled images; an SVM model is then trained on the extracted features. The research work in [38] uses a CNN along with a deconvolutional network for copy move forgery detection. The test image is divided into blocks, and the CNN extracts features from these image blocks. Self-correlations between the blocks are then calculated, the matched points between blocks are localized, and finally the deconvolutional network reconstructs the forgery mask. This copy move forgery detection (CMFD) technique is more robust against postprocessing operations such as affine transformation, JPEG compression, and blurring.
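The block self-correlation idea in [38] can be illustrated in miniature: split the image into blocks, describe each block with a feature vector (below, the raw pixel values stand in for the CNN features used in the actual work), and flag distinct block pairs whose features nearly coincide. The image values are a made-up example:

```python
def blocks(img, size):
    """Split a 2D image (list of rows) into (position, flattened block) pairs."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            flat = [img[y + dy][x + dx] for dy in range(size) for dx in range(size)]
            out.append(((y, x), flat))
    return out

def match_blocks(img, size=2, tol=1e-6):
    """Return positions of distinct block pairs with (near-)identical content,
    a stand-in for the self-correlation step of a CMFD pipeline."""
    bs = blocks(img, size)
    pairs = []
    for i in range(len(bs)):
        for j in range(i + 1, len(bs)):
            (p1, b1), (p2, b2) = bs[i], bs[j]
            if all(abs(a - b) <= tol for a, b in zip(b1, b2)):
                pairs.append((p1, p2))
    return pairs

# A 4x4 image where the top-left 2x2 block was copied to the bottom-right
img = [
    [9, 9, 0, 1],
    [9, 9, 2, 3],
    [4, 5, 9, 9],
    [6, 7, 9, 9],
]
print(match_blocks(img))  # [((0, 0), (2, 2))]
```

Learned features replace raw pixels in practice precisely because they keep matching robust under JPEG compression, blurring, and affine transformation, where exact pixel equality breaks down.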
The study in [39] uses Mask R-CNN and the Sobel filter for the detection and localization of copy move and image splicing forgeries. Here, the Sobel filter guides the predicted masks toward gradients close to those of the real mask.
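For reference, the Sobel filter estimates image gradients with two fixed 3×3 kernels; the gradient magnitude is large exactly where intensity changes sharply, i.e., along mask boundaries. A minimal pure-Python sketch for interior pixels:

```python
import math

# Standard Sobel kernels for horizontal (GX) and vertical (GY) gradients
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude at interior pixels of a 2D image (list of rows)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical edge: left half dark (0), right half bright (10)
img = [[0, 0, 10, 10] for _ in range(4)]
mag = sobel_magnitude(img)
print(mag[1])  # strong response at the interior edge columns, zero elsewhere
```

Comparing the Sobel response of a predicted mask with that of the ground-truth mask thus penalizes boundary disagreement, which is the intuition behind its use in [39].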
The work in [40] uses six convolutional layers and three FC layers, with batch normalization in all the convolutional layers and dropout in the FC layers (except the last). The CoMoFoD and BOSSBase datasets are used to evaluate this technique, which achieves accuracies of 95.97% and 94.26%, respectively. The research study in [41] uses a pipeline of segmentation, feature extraction, and dense depth reconstruction to identify the tampered area for copy move forgery detection. Here, the forged image is segmented with simple linear iterative clustering (SLIC). Then, from the segmented patches, features at various scales are extracted using VGG-16. These features are used to reconstruct the dense depth of the image pixels, which aids in matching the forged and original regions. After the reconstruction, the ADM (adaptive patch matching) technique is applied to find the matched regions. Most of the suspicious regions become apparent at the end of this operation: the unforged regions are removed and the forged regions remain visible. The experiments used the MICC F220 dataset and achieve a precision of 98%, recall of 89.5%, F1-score of 92%, and accuracy of 95%. The main contribution of the research in [42] is the development of a CNN for categorizing images into two groups: authentic and forged. The CNN extracts image features and creates feature maps, averages the produced feature maps, and searches for feature correspondences and dependencies. The trained CNN is then used to classify the images. This technique has been tested on the MICC F220, MICC F2000, and MICC F600 datasets in a variety of copy move situations, including single and multiple cloning with varying cloning regions, and achieved 100% accuracy and zero log loss using 50 epochs.
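The metrics quoted throughout these studies derive from confusion-matrix counts. A minimal sketch of the standard definitions follows; the counts below are illustrative placeholders, not the actual results of any cited dataset:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F1-score, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Illustrative counts: 90 true positives, 10 false positives,
# 5 false negatives, 95 true negatives
p, r, f1, acc = metrics(tp=90, fp=10, fn=5, tn=95)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f} accuracy={acc:.3f}")
```

Because F1 is the harmonic mean of precision and recall, it sits between the two and is pulled toward the lower value, which is why a high precision with a weaker recall (as in [41]) yields an F1 below the precision figure.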
The earlier research work shows remarkable performance but suffers from a few challenges, such as generalization issues due to heavy reliance on training data and the need for suitable hyperparameter selection. To address this, the researchers in [43] proposed two deep learning techniques for copy move forgery detection: a custom-designed architecture and a transfer learning model. To address the generalization challenge, different standard datasets were employed. In the custom design technique, five architectures of different depths were designed (with up to five convolutional layers and two FC layers). The second technique is transfer learning, for which the pretrained VGG-16 model is used. The pretrained VGG-16 model differs from the custom-designed model in depth, the number of filters in the convolutional layers, the activation function, and the number of convolutional layers before the pooling layer. The transfer-learning VGG-16 model obtained metrics around 10% higher than the custom-designed model but required more inference time.