index.json
[{"authors":["admin"],"categories":null,"content":"I am a university lecturer in data science at the University of Stirling. Previously, I was a lead computer vision researcher at the Parisian startup Qopius. In 2019, I was awarded a PhD degree in computer vision at the Hubert Curien Laboratory, Lyon University, under the supervision of Christophe Ducottet (leader of the Image Analysis and Understanding team). Before that, I was an MSc intern in the Vision Lab of the School of Engineering and Physical Sciences at Heriot-Watt University, Edinburgh, under the supervision of Neil M. Robertson (Director of Research for speech, image and vision systems, Queen\u0026rsquo;s University Belfast). In addition, I was awarded a triple-degree, two-year European Masters in Vision and Robotics (VIBOT) with an Erasmus Mundus scholarship (2012-2014). I am also the winner (among participants) of the 2D reflection symmetry competition in the ICCV'17 workshop Detecting Symmetry in the Wild.\n","date":-62135596800,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":-62135596800,"objectID":"2525497d367e79493fd32b198b28f040","permalink":"https://mawady.github.io/author/mohamed-elawady/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/author/mohamed-elawady/","section":"authors","summary":"I am a university lecturer in data science at the University of Stirling. Previously, I was a lead computer vision researcher at the Parisian startup Qopius. 
In 2019, I was awarded a PhD degree in computer vision at the Hubert Curien Laboratory, Lyon University, under the supervision of Christophe Ducottet (leader of the Image Analysis and Understanding team).","tags":null,"title":"Mohamed Elawady","type":"authors"},{"authors":["Mohamed Elawady","Christophe Ducottet","Olivier Alata","Cecile Barat","Phillipe Colantoni"],"categories":null,"content":"","date":1507694746,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1507694746,"objectID":"cab81cbb2afb7a4dfbeb150d90e1bf69","permalink":"https://mawady.github.io/publication/sym1/","publishdate":"2017-10-11T06:05:46+02:00","relpermalink":"/publication/sym1/","section":"publication","summary":"Symmetry is one of the most significant visual properties of an image plane, identifying geometrically balanced structures in real-world objects. Existing symmetry detection methods rely on descriptors of local image features and their neighborhood behavior, resulting in incomplete symmetry-axis candidates for discovering mirror similarities at a global scale. In this paper, we propose a new reflection symmetry detection scheme based on reliable edge-based feature extraction using Log-Gabor filters, together with an efficient voting scheme parameterized by the corresponding textural and color neighborhood information. 
Experimental evaluation on four single-case and three multiple-case symmetry detection datasets validates the superior performance of the proposed method in finding global symmetries inside an image.","tags":[],"title":"Wavelet-based Reflection Symmetry Detection via Textural and Color Histograms","type":"publication"},{"authors":["Mohamed Elawady","Olivier Alata","Christophe Ducottet","Cecile Barat","Phillipe Colantoni"],"categories":null,"content":"","date":1502501794,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1502501794,"objectID":"73ee2c7bd8a7efc160d9a03fb8660d54","permalink":"https://mawady.github.io/publication/sym-kde/","publishdate":"2017-08-12T03:36:34+02:00","relpermalink":"/publication/sym-kde/","section":"publication","summary":"Symmetry is an important composition feature, identified by investigating similar sides inside an image plane. It plays a crucial role in recognizing man-made or natural objects. Recent symmetry detection approaches use a smoothing kernel over different voting maps in the polar coordinate system to detect symmetry peaks, which splits the regions of symmetry-axis candidates in an inefficient way. We propose a reliable voting representation based on weighted linear-directional kernel density estimation to detect multiple symmetries in challenging real-world and synthetic images. 
Experimental evaluation on two public datasets demonstrates the superior performance of the proposed algorithm in detecting global symmetry axes with respect to the major image shapes.","tags":[],"title":"Multiple Reflection Symmetry Detection via Linear-Directional Kernel Density Estimation","type":"publication"},{"authors":["Ibrahim Sadek","Mohamed Elawady","Abd El Rahman Shabayek"],"categories":null,"content":"","date":1499825539,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1499825539,"objectID":"977d60b87d488b424698b65a4a37daa5","permalink":"https://mawady.github.io/publication/ret-cls/","publishdate":"2017-07-12T04:12:19+02:00","relpermalink":"/publication/ret-cls/","section":"publication","summary":"Diabetic retinopathy is diagnosed in a timely manner by experienced ophthalmologists through color eye fundus images, in order to recognize potential retinal features and identify early-blindness cases. In this paper, we propose extracting deep features from the last fully connected layer of four different pre-trained convolutional neural networks. These features are then fed into a non-linear classifier to discriminate three diabetic classes, i.e., normal, exudates, and drusen. Averaged across 1113 color retinal images collected from six publicly available annotated datasets, the deep-feature approach performs better than the classical bag-of-words approach. 
The proposed approaches achieve an average accuracy between 91.23% and 92.00%, a more than 13% improvement over traditional state-of-the-art methods.","tags":[],"title":"Automatic Classification of Bright Retinal Lesions via Deep Network Features","type":"publication"},{"authors":["Mohamed Elawady","Christophe Ducottet","Cecile Barat","Phillipe Colantoni"],"categories":null,"content":"","date":1476237380,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1476237380,"objectID":"bb4e75b24d5f1619dd1b9f2117d3e0a6","permalink":"https://mawady.github.io/publication/sym-gbt/","publishdate":"2016-10-12T03:56:20+02:00","relpermalink":"/publication/sym-gbt/","section":"publication","summary":"In recent years, there has been renewed interest in bilateral symmetry detection in images. It consists of detecting the main bilateral symmetry axis inside artificial or natural images. State-of-the-art methods combine feature point detection, pairwise comparison, and voting in a Hough-like space. In spite of their good performance, they fail to give reliable results on challenging real-world and artistic images. In this paper, we propose a novel symmetry detection method using multi-scale edge features combined with local orientation histograms. An experimental evaluation is conducted on public datasets plus a new aesthetic-oriented dataset. 
The results show that our approach outperforms all other competing methods.","tags":[],"title":"Global bilateral symmetry detection using multiscale mirror histograms","type":"publication"},{"authors":["Mohamed Elawady","Ibrahim Sadek","Abd El Rahman Shabayek","Gerard Pons","Sergi Ganau"],"categories":null,"content":"","date":1465697025,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1465697025,"objectID":"3eb17913d57c0bf496b3a4ccfd1f9adc","permalink":"https://mawady.github.io/publication/bus-seg/","publishdate":"2016-06-12T04:03:45+02:00","relpermalink":"/publication/bus-seg/","section":"publication","summary":"Breast cancer is one of the leading causes of cancer death among women worldwide. The proposed approach comprises three steps. First, the image is preprocessed to remove speckle noise while preserving important image features; three methods are investigated, i.e., the Frost Filter, Detail Preserving Anisotropic Diffusion, and the Probabilistic Patch-Based Filter. Second, Normalized Cut or Quick Shift is used to provide an initial segmentation map for breast lesions. Third, a postprocessing step is proposed to select the correct region from a set of candidate regions. This approach is implemented on a dataset containing 20 B-mode ultrasound images acquired from the UDIAT Diagnostic Center of Sabadell, Spain. The overall system performance is determined against the ground-truth images. 
The best system performance is achieved through the following combinations: Frost Filter with Quick Shift, Detail Preserving Anisotropic Diffusion with Normalized Cut, and Probabilistic Patch-Based Filter with Normalized Cut.","tags":[],"title":"Automatic Nonlinear Filtering and Segmentation for Breast Ultrasound Images","type":"publication"},{"authors":["Mohamed Elawady"],"categories":null,"content":"","date":1402537884,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1402537884,"objectID":"012dfe350272e04ec7be274bdcce51e9","permalink":"https://mawady.github.io/publication/coral-msc/","publishdate":"2014-06-12T03:51:24+02:00","relpermalink":"/publication/coral-msc/","section":"publication","summary":"Autonomous repair of deep-sea coral reefs is a recently proposed idea to support the ocean ecosystem, which is vital for commercial fishing, tourism, and other species. This idea can be realized using many small autonomous underwater vehicles (AUVs) and swarm intelligence techniques to locate and replace chunks of coral which have been broken off, thus enabling re-growth and maintaining the habitat. The aim of this project is to develop machine vision algorithms that enable an underwater robot to locate a coral reef and a chunk of coral on the seabed and prompt the robot to pick it up. Although there is no literature on this particular problem, related work on fish counting may give some insight into it. The technical challenges are principally due to the potential lack of clarity of the water and platform stabilization, as well as spurious artifacts (rocks, fish, and crabs). We present an efficient sparse classification of coral species using a supervised deep learning method, Convolutional Neural Networks (CNNs). 
We compute the Weber Local Descriptor (WLD), Phase Congruency (PC), and Zero Component Analysis (ZCA) Whitening to extract shape and texture feature descriptors, which are employed as supplementary channels (feature-based maps) alongside the basic spatial color channels (spatial-based maps) of the input coral image. We also experiment with state-of-the-art underwater preprocessing algorithms for image enhancement, color normalization, and color-conversion adjustment. Our proposed coral classification method is developed on the MATLAB platform and evaluated on two different coral datasets (University of California San Diego's Moorea Labeled Corals and Heriot-Watt University's Atlantic Deep Sea).","tags":[],"title":"Sparse Coral Classification Using Deep Convolutional Neural Networks","type":"publication"},{"authors":["Mohamed Elawady","Ibrahim Sadek","Hiliwi Kidane"],"categories":null,"content":"","date":1389495560,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1389495560,"objectID":"462b25b3e12443f6df0f61ade6c02558","permalink":"https://mawady.github.io/publication/drn-obs/","publishdate":"2014-01-12T04:59:20+02:00","relpermalink":"/publication/drn-obs/","section":"publication","summary":"In the literature, several approaches try to make UAVs fly autonomously, e.g., by extracting perspective cues such as straight lines. However, such cues are only available in well-defined man-made environments, and many other cues require sufficient texture information. Our main target is to detect and avoid frontal obstacles from a monocular camera on a quadrotor Ar.Drone 2 by exploiting optical flow as motion parallax; the drone is permitted to fly at a speed of 1 meter per second and an altitude ranging from 1 to 4 meters above ground level. In general, detecting and avoiding frontal obstacles is quite a challenging problem because optical flow has some limitations which should be taken into account, i.e., 
lighting conditions and the aperture problem.","tags":[],"title":"Detecting and avoiding frontal obstacles from monocular camera for micro unmanned aerial vehicles","type":"publication"}]