Dominique MICHELUCCI’s publications


Last update: December 2022

My ORCID page    my DBLP page    my Google Scholar page


2022


Quantized random projections of SIFT features for cancelable fingerprints

DJEBLI, Hamza, AIT-AOUDIA, Samy, et MICHELUCCI, Dominique. Quantized random projections of SIFT features for cancelable fingerprints. Multimedia Tools and Applications, 2022, p. 1-21.

Abstract. Biometric recognition, particularly fingerprint, has been widely adopted in many civil and military applications. However, security concerns have arisen regarding the protection of the saved biometric templates. If the fingerprints database is compromised, the person can no longer use his/her registered fingerprint for authentication. Cancelable biometrics have been used to overcome this issue, by transforming fingerprint templates to a secure representation before saving them on the database. The identity verification is performed in the transformed domain, and a new transform is assigned to a user if his/her biometric data is compromised. In this context, we propose a unique cancelable fingerprint scheme based on the extraction of Scale Invariant Feature Transform (SIFT) features from fingerprint minutiae positions. To provide cancelability, SIFT features are transformed by applying user-specific random projections, followed by quantization to ensure irreversibility. We successfully achieved a zero Equal Error Rate (EER) for the Fingerprint Verification Competition (FVC) 2002 DB1 benchmark in the different-key scenario, and 1.78 EER in the stolen-key scenario. We compare our results with state-of-the-art methods based on the EER metric in the stolen-key scenario. Genuine and impostor distributions have also been used for performance and security assessment of the proposed method. Moreover, we demonstrate the robustness of the proposed approach against brute force attacks.

Springer link ResearchGate link


Fast Fourier transform for fingerprint enhancement and features extraction with adaptive block size using orientation fields

Amina AIT-AOUDIA, Dominique MICHELUCCI, Youcef ZAFOUNE. Fast Fourier transform for fingerprint enhancement and features extraction with adaptive block size using orientation fields. ITM Web Conf. Volume 43, 2022. The International Conference on Artificial Intelligence and Engineering 2022 (ICAIE’2022). DOI:https://doi.org/10.1051/itmconf/20224301014

Keywords: Minutiae, fingerprint enhancement, orientation field, quadtree decomposition, FFT

Abstract. Extracting minutiae from a digital fingerprint is a crucial step in fingerprint-based recognition systems. This work deals with poor-quality fingerprint images containing broken ridges. The enhancement stage connects broken ridges and is essential for extracting correct minutiae. We use an FFT variant [8] for this stage, but, to truly benefit from the FFT in a block, it is essential to determine a suitable block size, depending on the ridge orientation field. We propose to use a quadtree to partition the ridge orientation field into homogeneous blocks. A block is homogeneous when at least seventy percent of its ridge orientations are within ten degrees. Another issue addressed in this article is the choice of a suitable neighborhood window size W for computing the orientation field image, depending on the fingerprint image quality. The performance improvements of our algorithm are evaluated and compared with the standard measures MSE, PSNR and GI, on databases DB1 to DB4 of FVC2004 and the NIST special database SD302d.
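The quadtree homogeneity criterion described in the abstract (a block is homogeneous when at least seventy percent of its ridge orientations fall within ten degrees) can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function names, the median-based reference orientation, and the minimum block size are assumptions, and angle wraparound at 180° is ignored for simplicity.

```python
def is_homogeneous(orients, tol_deg=10.0, frac=0.70):
    """A block is homogeneous if >= 70% of its ridge orientations lie
    within tol_deg degrees of the block's median orientation."""
    med = sorted(orients)[len(orients) // 2]
    near = sum(1 for a in orients if abs(a - med) <= tol_deg)
    return near / len(orients) >= frac

def quadtree_blocks(field, x, y, size, min_size=8):
    """Recursively split a size x size block of the orientation field
    (field[row][col], in degrees) until it is homogeneous or reaches
    min_size; return the list of (x, y, size) leaf blocks."""
    orients = [field[j][i] for j in range(y, y + size)
                           for i in range(x, x + size)]
    if size <= min_size or is_homogeneous(orients):
        return [(x, y, size)]
    h = size // 2
    blocks = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        blocks += quadtree_blocks(field, x + dx, y + dy, h, min_size)
    return blocks
```

On a 16x16 field whose left half has orientation 0° and right half 90°, the root block fails the 70% test and splits once into four homogeneous quadrants.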

 LINK    A copy.   Another copy.


QPert: Query Perturbation to improve shape retrieval algorithms

Abdelhakim Benkrama, Bilal Mokhtari, Kamal Eddine Melkemi, Sebti Foufou, Omar Boudraa and Dominique Michelucci. QPert: Query Perturbation to improve shape retrieval algorithms. Submitted for publication.

Given a particular shape, called the query, shape retrieval algorithms use shape descriptors to compute the similarity between a query shape and a set of database shapes in order to find those that are very similar to the query. Although there is a wide range of descriptors available in the literature, most of them are restricted to a specific class of shapes and none achieves satisfactory results when used with different classes of shapes. Introducing new descriptors, improving existing descriptors, or merging them are potential strategies for enhancing shape retrieval algorithms. In this paper, we address the problem from a different angle. Indeed, instead of refining the descriptors, the proposed approach, called QPert, slightly perturbs the query to create copies (or clones) that are closer than the query itself to the database shapes. Clones are created by adding a small noise to the coordinates of a randomly selected subset of mesh vertices or by applying genetic operators (e.g., crossover) between existing clones. A Genetic Algorithm (GA) gradually evolves a population of clones so that the fittest clones get closer and closer to their most similar shapes in the database. The GA is implemented as a multiagent system (MAS) that enables any number of shape descriptors, classic or modern (i.e., machine learning based), to cooperate without the need for synchronization or direct communication between agents. Experimental results and comparisons demonstrate the advantages of this approach, regardless of the shape descriptors used.
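The clone-creation step described above (small noise on a randomly selected subset of mesh vertices) can be sketched in a few lines. This is a hypothetical illustration, not the QPert code: the function name, the subset fraction and the noise magnitude are assumptions.

```python
import random

def make_clone(vertices, subset_frac=0.1, sigma=0.01, rng=None):
    """Clone a mesh by adding small Gaussian noise to the coordinates
    of a randomly selected subset of its vertices; the remaining
    vertices are copied verbatim."""
    rng = rng or random.Random()
    clone = [list(v) for v in vertices]            # deep copy of all vertices
    k = max(1, int(subset_frac * len(vertices)))   # size of the perturbed subset
    for idx in rng.sample(range(len(vertices)), k):
        clone[idx] = [c + rng.gauss(0.0, sigma) for c in clone[idx]]
    return clone
```

A GA would evaluate each clone with a shape descriptor's dissimilarity to the database shapes and keep the fittest clones for the next generation.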

Keywords: 3D shape retrieval, shape descriptors, shape dissimilarity measures, perturbation, noise-enhanced shape retrieval, genetic algorithms, multiagent system


PUNet: Novel and efficient deep neural network architecture for handwritten documents word spotting

Omar BOUDRAA, Dominique MICHELUCCI, Walid Khaled HIDOUCI. PUNet: Novel and efficient deep neural network architecture for handwritten documents word spotting. Pattern Recognition Letters Volume 155, March 2022, Pages 19-26. https://doi.org/10.1016/j.patrec.2022.01.019 (CiteScore: 6.7 & impact factor: 3.756).

Keywords: Deep learning, Word spotting, Information retrieval, Transfer learning, Convolutional neural network, Pyramidal histogram of characters

Highlights:

– An original Deep Learning-based system is proposed.

– We adapt efficient U-Net architecture to the Word Spotting topic.

– We make use of enhanced PHOC encoding for Word Spotting and Clustering.

– No Preprocessing or Data augmentation is needed.

– We achieved better results in both one-writer and multiple-writers handwritten documents than prior works.

Abstract. Ancient scripts, documents and photos are valuable supports that open a window onto humanity’s past and its social and cultural treasures. However, due to their inevitable deterioration, seeking information in these documents remains a key challenge. Word Spotting remains an attractive alternative to Optical Character Recognition systems for information retrieval, especially in the case of old documents. The quantity of research work on this particular topic keeps growing, extracting relevant words as an alternative to recognizing full text content. Here, we propose a new robust Deep Learning based framework for data exploration of ancient documents, applying Transfer Learning from the U-Net network and using the recent Pyramidal Histogram of Characters (PHOC) encoding: no special preprocessing or data augmentation is required. Experimental results reveal its accuracy and its high capacity to correctly classify words of various handwritten datasets.

LINK


2021


An Efficient Cooperative Smearing Technique for Degraded Historical Document Image Segmentation

BOUDRAA, Omar, HIDOUCI, Walid Khaled, et MICHELUCCI, Dominique. An Efficient Cooperative Smearing Technique for Degraded Historical Document Image Segmentation. International Journal of Image and Graphics, 2021, vol. 21, no 02, p. 2150012. (H-Index : 20)

LINK

Abstract. Segmentation is one of the critical steps in historical document image analysis systems; it determines the quality of the search, understanding, recognition and interpretation processes. It allows isolating the objects to be considered and separating the regions of interest (paragraphs, lines, words and characters) from other entities (figures, graphs, tables, etc.). This stage follows thresholding, which aims to improve the quality of the document and to separate its background from its foreground, and skew detection and correction, which straightens the document. Here, a hybrid method is proposed in order to locate words and characters in both handwritten and printed documents. Numerical results prove the robustness and the high precision of our approach applied to old degraded document images over four common datasets, in which the pair (Recall, Precision) reaches approximately 97.7%.

Keywords: Historical document image segmentation, top-down segmentation, bottom-up segmentation, hybrid approach, connected components analysis.


2020


Hidden Markov random fields and cuckoo search method for medical image segmentation

GUERROUT, EL-Hachemi, MAHIOU, Ramdane, MICHELUCCI, Dominique, et al. Hidden Markov random fields and cuckoo search method for medical image segmentation. arXiv preprint arXiv:2005.09377, 2020.

Abstract. Segmentation of medical images is an essential part of the diagnostic process. Physicians require automatic, robust and valid results. Hidden Markov Random Fields (HMRF) provide a powerful model, which formulates the segmentation problem as the minimization of an energy function. The cuckoo search (CS) algorithm is one of the recent nature-inspired meta-heuristic algorithms; it has shown its efficiency in many engineering optimization problems. In this paper, we use three cuckoo search algorithms to achieve medical image segmentation.

LINK    LINK    LINK 


Using skeleton and Hough transform variant to correct skew in historical documents

BOUDRAA, Omar, HIDOUCI, Walid Khaled, et MICHELUCCI, Dominique. Using skeleton and Hough transform variant to correct skew in historical documents. Mathematics and computers in simulation, 2020, vol. 167, p. 389-403.

LINK

Abstract. As a main part of several document analysis systems, skew estimation represents one of the major research challenges, particularly in the case of historical document exploration. In this paper, we propose an original skew angle detection and correction technique. The Morphological Skeleton is introduced to considerably diminish the amount of data by eliminating redundant pixels and preserving only the central curves of the image components. Next, the proposed method uses the Progressive Probabilistic Hough Transform (PPHT) to find image lines. Finally, a specific procedure is applied in order to measure the global skew angle of the document image from these identified lines. Experimental results demonstrate the accuracy and the effectiveness of our approach on skew angle detection upon three popular datasets covering many types of documents of diverse linguistic writings (Chinese, Greek and English) and different styles (horizontal or vertical orientations, including figures and tables, multi-column page layouts).

Keywords: Skew estimation, Document image analysis, Skew correction, Progressive Probabilistic Hough Transform, Morphological Skeleton


Optimizing Query Perturbations to Enhance Shape Retrieval

MOKHTARI, Bilal, MELKEMI, Kamal Eddine, MICHELUCCI, Dominique, et al. Optimizing Query Perturbations to Enhance Shape Retrieval. In : International Conference on Mathematical Aspects of Computer and Information Sciences. Springer, Cham, 2019. p. 422-437.

3D shape retrieval algorithms use shape descriptors to identify the shapes in a database that are the most similar to a given key shape, called the query. Many shape descriptors are known but none is perfect. Therefore, the common approach in building 3D shape retrieval tools is to combine several descriptors with some fusion rule. This article proposes an orthogonal approach. The query is improved with a Genetic Algorithm, which evolves a population of perturbed copies of the query, called clones. The best clone is the closest to its closest shapes in the database, for a given shape descriptor. Experimental results show that improving the query also improves the precision and completeness of the shape retrieval output. This article shows evidence for several shape descriptors. Moreover, the method is simple and massively parallel.

LINK   LINK


2019


Optimizing Query Perturbations to Enhance Shape Retrieval

Bilal Mokhtari, Kamal Eddine Melkemi, Dominique Michelucci, Sebti Foufou. Optimizing Query Perturbations to Enhance Shape Retrieval. International Conference MACIS 2019 : Mathematical Aspects of Computer and Information Sciences. Nov 13–15, 2019. Gebze, Kocaeli, Turkey.

Abstract. 3D shape retrieval algorithms use shape descriptors to identify the shapes in a database that are the most similar to a given key shape, called the query. Many shape descriptors are known but none is perfect. Therefore, the common approach in building 3D shape retrieval tools is to combine several descriptors with some fusion rule. This article proposes an orthogonal approach. The query is improved with a Genetic Algorithm, which evolves a population of perturbed copies of the query, called clones. The best clone is the closest to its closest shapes in the database, for a given shape descriptor. Experimental results show that improving the query also improves the precision and completeness of the shape retrieval output. This article shows evidence for several shape descriptors. Moreover, the method is simple and massively parallel.

Keywords: Computer vision · 3D Shape matching and recognition · Shape Retrieval · Shape Descriptors · Cloning · Genetic Algorithms.

Our copy


Using skeleton and Hough transform variant to correct skew in historical documents

Omar Boudraa, Walid Khaled Hidouci, Dominique Michelucci. Using skeleton and Hough transform variant to correct skew in historical documents. Mathematics and Computers in Simulation, May 2019. (CiteScore: 1.88 & Impact Factor: 1.409)

Abstract. As a main part of several document analysis systems, skew estimation represents one of the major research challenges, particularly in the case of historical document exploration. In this paper, we propose an original skew angle detection and correction technique. The Morphological Skeleton is introduced to considerably diminish the amount of data by eliminating redundant pixels and preserving only the central curves of the image components. Next, the proposed method uses the Progressive Probabilistic Hough Transform (PPHT) to find image lines. Finally, a specific procedure is applied in order to measure the global skew angle of the document image from these identified lines. Experimental results demonstrate the accuracy and the effectiveness of our approach on skew angle detection upon three popular datasets covering many types of documents of diverse linguistic writings (Chinese, Greek and English) and different styles (horizontal or vertical orientations, including figures and tables, multi-column page layouts).

Keywords: Skew estimation; Document image analysis; Skew correction; Progressive probabilistic Hough transform; Morphological skeleton

LINK


Degraded Historical Documents Images Binarization Using a Combination of Enhanced Techniques

Omar Boudraa, Walid Khaled Hidouci, Dominique Michelucci. Degraded Historical Documents Images Binarization Using a Combination of Enhanced Techniques.

Abstract: Document image binarization is the initial and a crucial step in many document analysis and recognition schemes. In fact, it is still a relevant research subject and a fundamental challenge due to its importance and influence. This paper provides an original multi-phase system that hybridizes various efficient image thresholding methods in order to get the best binarization output. First, to improve contrast in particularly defective images, the application of the CLAHE algorithm is suggested and justified. We then use a cooperative technique to segment the image into two separate classes. Finally, a special transformation is applied for the purpose of removing scattered noise and correcting character shapes. Experiments demonstrate the precision and the robustness of our framework applied to degraded historical document images on three benchmarks, compared to other noted methods.

Keywords: historical document image analysis, global thresholding, adaptive thresholding, hybrid algorithm, contrast enhancement.

LINK


2018


Towards a better integration of modelers and black box constraint solvers within the product design process

Jean-Philippe Pernot, Dominique Michelucci, Marc Daniel, Sebti Foufou. Towards a better integration of modelers and black box constraint solvers within the product design process. Annals of Mathematics and Artificial Intelligence https://doi.org/10.1007/s10472-018-9599-5

Abstract. This paper presents a new way of interaction between modelers and solvers to support the Product Development Process (PDP). The proposed approach extends the functionalities and the power of the solvers by taking into account procedural constraints. A procedural constraint requires calling a procedure or a function of the modeler. This procedure performs a series of actions and geometric computations in a certain order. The modeler calls the solver for solving a main problem, the solver calls the modeler’s procedures, and similarly procedures of the modeler can call the solver for solving sub-problems. The features, specificities, advantages and drawbacks of the proposed approach are presented and discussed. Several examples are also provided to illustrate this approach.

Keywords: Geometric modeling · Constraints · Procedural constraints · Solver · Modeler

LINK    LINK   LINK


Equations and interval computations for some fractals

Lincong Fang, Dominique Michelucci, Sebti Foufou. Equations and interval computations for some fractals. Accepted in: Fractals, 2018.

Abstract. Very few characteristic functions, or equations, have been reported so far for fractals. Such functions, called Rvachev functions in function-based modeling, are zero on the boundary, negative for inside points and positive for outside points. This paper proposes Rvachev functions for some classical fractals. These functions are convergent series, which can be bounded with interval arithmetic and interval analysis in finite time. This makes it possible to extend the RSS (Recursive Space Subdivision) method, which is classical in Computer Graphics and Interval Analysis, to fractal geometric sets. The newly proposed fractal functions can also be composed with classical Rvachev functions, today routinely used in CSG (Constructive Solid Geometry) trees of Computer Graphics or function-based modeling.
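The key idea of bounding a convergent series in finite time can be illustrated with a toy sketch (not the paper's Rvachev functions): sum the first n terms and bound the neglected tail geometrically. Floating-point rounding is ignored here, whereas true interval arithmetic would round the bounds outward.

```python
def series_enclosure(term, ratio, c, n):
    """Return an interval (lo, hi) containing S = sum_{k>=0} term(k),
    assuming |term(k)| <= c * ratio**k with 0 < ratio < 1.  The first
    n terms are summed exactly and the remainder is bounded by the
    geometric tail c * ratio**n / (1 - ratio)."""
    partial = sum(term(k) for k in range(n))
    tail = c * ratio ** n / (1.0 - ratio)   # bound on the neglected remainder
    return (partial - tail, partial + tail)
```

For the geometric series sum 0.5**k = 2, ten terms already give an enclosure of width 2^-8 around the exact value; the same principle lets a recursive space subdivision decide in finite time whether a cell is inside, outside, or on the boundary.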

Keywords: Function-based modeling, function representation, fractal, interval arithmetic

PREPRINT LINK


Conjugate Gradient method for Brain Magnetic Resonance Images Segmentation

EL-Hachemi Guerrout, Samy Ait-Aoudia, Dominique Michelucci, and Ramdane Mahiou. Conjugate Gradient method for Brain Magnetic Resonance Images Segmentation. 6th IFIP International Conference on Computational Intelligence and Its Applications, May 2018, Oran, Algeria.

Abstract. Image segmentation is the process of partitioning the image into regions of interest in order to provide a meaningful representation of information. Nowadays, segmentation has become a necessity in many practical medical imaging methods, such as locating tumors and diseases. The Hidden Markov Random Field model is one of several techniques used in image segmentation. It provides an elegant way to model the segmentation process. This modeling leads to the minimization of an objective function. The Conjugate Gradient algorithm (CG) is one of the best known optimization techniques. This paper proposes the use of the nonlinear Conjugate Gradient algorithm (CG) for image segmentation, in combination with the Hidden Markov Random Field modelization. Since derivatives are not available for this expression, finite differences are used in the CG algorithm to approximate the first derivative. The approach is evaluated using a number of publicly available images, where the ground truth is known. The Dice coefficient is used as an objective criterion to measure the quality of segmentation. The results show that the proposed CG approach compares favorably with other variants of Hidden Markov Random Field segmentation algorithms.
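The combination described above, nonlinear CG driven by finite-difference gradients, can be sketched generically. This is a minimal illustration, not the paper's segmentation code: the Fletcher-Reeves beta, the crude backtracking line search, and the steepest-descent restart are simplifying assumptions, and the demo objective is an arbitrary smooth function standing in for the HMRF energy.

```python
def fd_grad(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xh = list(x)
        xh[i] += h
        g.append((f(xh) - fx) / h)
    return g

def cg_minimize(f, x0, iters=200, step=0.1):
    """Nonlinear conjugate gradient (Fletcher-Reeves beta) using
    finite-difference gradients and a backtracking line search."""
    x = list(x0)
    g = fd_grad(f, x)
    d = [-gi for gi in g]
    for _ in range(iters):
        t, fx = step, f(x)
        while f([xi + t * di for xi, di in zip(x, d)]) >= fx and t > 1e-12:
            t *= 0.5
        if t <= 1e-12:                       # no progress: restart along -gradient
            d = [-gi for gi in g]
            continue
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = fd_grad(f, x)
        beta = sum(gi * gi for gi in g_new) / max(sum(gi * gi for gi in g), 1e-30)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        if sum(di * gi for di, gi in zip(d, g_new)) > 0:   # not a descent direction
            d = [-gi for gi in g_new]
        g = g_new
    return x
```

On a smooth energy with a unique minimum, the iterates converge to the minimizer without any analytic derivative, which is exactly the point made in the abstract.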

COPY COPY LINK


Approximation of pore space with ellipsoids: a comparison of a geometrical method with a statistical one

Lucie DRUOTON, Dominique MICHELUCCI, Olivier MONGA, Abdelaziz BOURAS. Approximation of pore space with ellipsoids: a comparison of a geometrical method with a statistical one. Conference SITIS 2018.

Abstract. We work with tomographic images of pore space in soil. The images have large dimensions, so in order to speed up biological simulations (such as drainage or diffusion processes in soil), we want to describe the pore space with a number of geometrical primitives significantly smaller than the number of voxels in the pore space. In this paper, we use the curve skeleton of a volume to segment it into regions. We describe the method to compute the curve skeleton and to segment it with a simple segment approximation. We approximate each obtained region with an ellipsoid. The set of final ellipsoids represents the geometry of the pore space and will be used in future simulations. We compare this method, which we call the geometrical method, with the one described in the paper [8], which we name the statistical method (using the k-means algorithm). The previous article reports the first results of a research contract with Qatar, called SIMUPOR.

LINK


Equations ou procédures ?

D. Michelucci, J-P. Pernot, M. Daniel, S. Foufou. Exposé : "Contraintes : équations ou procédures ?" Journées GTMG 2018, Aix-en-Provence.

Résumé. Aujourd’hui, tous les modeleurs géométriques de CFAO fournissent des outils pour la modélisation géométrique avec des contraintes. Historiquement, les contraintes géométriques en 3D ont été représentées avec des équations, et les problèmes irréductibles ont été résolus avec une variante de la méthode de Newton, ou avec la méthode d’intersection des lieux. Cet article propose de remplacer les équations par des procédures. Le modeleur appelle le solveur, qui peut appeler toutes les procédures du modeleur. À leur tour, les procédures peuvent appeler le solveur pour les sous-problèmes. Cette approche rend possibles des choses impossibles auparavant et elle facilite la spécification des contraintes de CFAO sur les formes composites avec des formats hétérogènes. Le calcul de valeurs précises pour les dérivées est toujours possible. L’utilisation d’outils pour corriger et décomposer des systèmes en sous-systèmes ou pour exploiter la faible densité est toujours possible. Il est toujours possible d’utiliser des solveurs numériques de pointe et de profiter de la faible densité. Cet article présente les avantages de cette approche, ses inconvénients, les problèmes posés et sa mise en œuvre.

Abstract. Today all CADCAM geometric modellers provide tools for geometric modelling with constraints. Historically, geometric constraints in 3D were represented with equations, and basic problems and systems were solved with some variant of Newton’s method, or with the Locus Intersection Method. This article proposes to replace equations with procedures. The modeller calls the solver, which can call every procedure of the modeller. In turn, procedures can call the solver for sub-problems. This approach makes possible things which were impossible before, and it eases the specification of CADCAM constraints on composite shapes with heterogeneous formats. Computing precise values for derivatives is still possible. Using tools for debugging and decomposing systems into sub-systems or for exploiting sparsity is still possible. Using state-of-the-art numerical solvers and taking advantage of sparsity is still possible. This article presents the advantages, inconveniences, arising issues and implementation of this approach.

exposé article en français article in english


2017


Unsupervised geodesic convex combination of shape dissimilarity measures

Bilal Mokhtari, Kamal Eddine Melkemi, Dominique Michelucci, Sebti Foufou. Unsupervised geodesic convex combination of shape dissimilarity measures. Pattern Recognition Letters 98 (2017), 46–52.

LINK

Abstract. Dissimilarity or distance metrics are the cornerstone of shape matching and retrieval algorithms. As there is no unique dissimilarity measure that gives good performances in all possible configurations, these metrics are usually combined to provide reliable results. In this paper we propose to compute the best linear convex, or weighted, combination of any set of measured shape distances to enhance shape matching algorithms. To do so, a database is represented as a graph, where nodes are shapes and the edges carry the convex combination of dissimilarity measures. Weights are chosen to maximize the weighted distances between the query shape and the shapes in the database. The optimal weights are solutions of a linear programming problem. This fully unsupervised method improves the outcomes of any set of shape similarity measures, as shown in our experimental results performed on several popular 3D shape benchmarks.
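To get a feel for the linear program mentioned in the abstract, consider a stripped-down variant (not the paper's graph-based formulation): maximize the plain sum of weighted query-to-shape distances over the simplex of convex weights w >= 0, sum(w) = 1. A linear objective over a simplex attains its maximum at a vertex, so this toy LP has a closed-form solution; the function name and the matrix layout below are hypothetical.

```python
def best_convex_weights(distances):
    """distances[s][k] = k-th dissimilarity measure between the query
    and database shape s.  Maximize sum_s sum_k w[k] * distances[s][k]
    over {w : w >= 0, sum(w) == 1}.  The objective is linear in w, so
    the maximum sits at a simplex vertex: all weight goes to the
    descriptor with the largest total distance."""
    totals = [sum(col) for col in zip(*distances)]   # per-descriptor totals
    k = max(range(len(totals)), key=totals.__getitem__)
    return [1.0 if i == k else 0.0 for i in range(len(totals))]
```

The paper's actual LP works on graph distances rather than this raw sum, which is what makes its optimal weights non-trivial; the sketch only shows why the feasible set and objective form an LP at all.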


Hidden Markov Random Field model and Broyden Fletcher Goldfarb Shanno algorithm for Brain Image Segmentation

EL-Hachemi Guerrout, Samy Ait-Aoudia, Dominique Michelucci and Ramdane Mahiou. Hidden Markov Random Field model and Broyden Fletcher Goldfarb Shanno algorithm for Brain Image Segmentation. Journal of Experimental & Theoretical Artificial Intelligence, 2017.

Summary. Many routine medical examinations produce images of patients suffering from various pathologies. The attending physician analyzes these images and makes the appropriate diagnosis. The number of images produced by these various examinations continues to grow considerably. The segmentation of images is intended as an aid to analysis for the physician. It consists in dividing the image into homogeneous and significant regions. We focus on Hidden Markov Random Fields, referred to as HMRF, to model the problem of segmentation. This modeling leads to a classical function minimization problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods to solve unconstrained optimization problems. In this paper we investigate the combination of the HMRF model and the BFGS algorithm to perform the segmentation operation. The tests are conducted on a brain magnetic resonance image database largely used to objectively compare the results obtained.

Keywords: Brain image segmentation; Hidden Markov Random Field; Broyden-Fletcher-Goldfarb-Shanno algorithm; Dice coefficient

LINK


Combination of Hidden Markov Random Field and Conjugate Gradient for Brain Image Segmentation

EL-Hachemi Guerrout, Samy Ait-Aoudia, Dominique Michelucci, and Ramdane Mahiou. Combination of Hidden Markov Random Field and Conjugate Gradient for Brain Image Segmentation. ICPRAM 2017, 6th International Conference on Pattern Recognition Applications and Methods, Holiday Inn Porto Gaia, Porto, Portugal, February 24–26, 2017.

Keywords: Brain image segmentation, Hidden Markov Random Field, The Conjugate Gradient algorithm.

Abstract: Image segmentation is the process of partitioning the image into different regions that are meaningful and easier to analyze. Nowadays, segmentation has become a necessity in many practical medical imaging tasks, such as locating tumors and pathologies. The Hidden Markov Random Field model is one of several techniques used in image segmentation. It provides an elegant way to model the segmentation problem. This modeling leads to the minimization of an objective function. The Conjugate Gradient algorithm (CG) is one of the most well known optimization techniques. In this paper we show how the combination of the Hidden Markov Random Field model and the Conjugate Gradient algorithm achieves a good segmentation of brain images.

COPY LINK


2016


Segmentation by Tangent Filter

Mohamed Boulakradeche, Sami Ait-Aoudia, Dominique Michelucci, Kawther Taibouni and Abir Farouzi. Segmentation by Tangent Filter. MEDPRAI’2016, First Mediterranean Conference on Pattern Recognition and Artificial Intelligence, November 22-23, Tebessa, Algeria.

Abstract. Image contour detection is fundamental to many image analysis applications, including image segmentation, object recognition and classification. In this paper, the main idea is to process each pixel with a special mask chosen according to the orientation of the gradient. The masks used are inspired by Laplacian operators. The processed pixel is not at the center of the mask but on its tangent. The results obtained are very positive on certain types of images.

LINK MEDPRAI’2016.   LINK


Hidden Markov Random Field model and BFGS algorithm for Brain Image Segmentation

EL-Hachemi Guerrout, Samy Ait-Aoudia, Dominique Michelucci, and Ramdane Mahiou. Hidden Markov Random Field model and BFGS algorithm for Brain Image Segmentation. MEDPRAI’2016. The first Mediterranean Conference on Pattern Recognition and Artificial Intelligence. November 22–23, 2016, Tebessa, Algeria.

LINK ResearchGate

Abstract. Brain MR image segmentation has attracted particular focus in medical imaging. Automatic image analysis and interpretation have become a necessity. Segmentation is one of the key operations to provide crucial decision support to physicians. Its goal is to simplify the representation of an image into items that are meaningful and easier to analyze. Hidden Markov Random Fields (HMRF) provide an elegant way to model the segmentation problem. This model leads to the minimization problem of a function. BFGS (the Broyden-Fletcher-Goldfarb-Shanno algorithm) is one of the most powerful methods to solve unconstrained optimization problems. This paper presents how we combine HMRF and BFGS to achieve a good segmentation. The brain image segmentation quality is evaluated on ground-truth images, using the Dice coefficient.


Thèse de B. Mokhtari : Appariement de formes, recherche par forme clef

Bilal Mokhtari’s PhD thesis. Bilal was co-supervised by D. Michelucci (University of Burgundy, Dijon, France) and K. Melkemi (Biskra University, Algeria).

Mémoire de thèse, en français, de Bilal MOKHTARI. Appariement de formes, recherche par forme clef. Thèse en co-tutelle entre l’Université de Bourgogne et l’Université de Biskra, Algérie.

Transparents de la soutenance, en français.

Résumé. Cette thèse porte sur l’appariement des formes, et la recherche par forme clef. Elle décrit quatre contributions à ce domaine. La première contribution est une amélioration de la méthode des nuées dynamiques pour partitionner au mieux les voxels à l’intérieur d’une forme donnée; les partitions obtenues permettent d’apparier les objets par un couplage optimal dans un graphe biparti. La seconde contribution est la fusion de deux descripteurs, l’un local, l’autre global, par la règle du produit. La troisième contribution considère le graphe complet, dont les sommets sont les formes de la base ou la requête, et les arêtes sont étiquetées par plusieurs distances, une par descripteur; ensuite cette méthode calcule par programmation linéaire la combinaison convexe des distances qui maximise soit la somme des longueurs des plus courts chemins entre la requête et les objets de la base de données, soit la longueur du plus court chemin entre la requête et l’objet comparé à la requête. La quatrième contribution consiste à perturber la requête avec un algorithme génétique pour la rapprocher des formes de la base de données, pour un ou des descripteur(s) donné(s); cette méthode est massivement parallèle, et une architecture multi-agent est proposée. Ces méthodes sont comparées aux méthodes classiques, et ont de meilleures performances, en terme de précision.

Mots-clés : Appariement de formes, descripteurs de formes, recherche par forme clef.

  Abstract: This thesis concerns shape matching and shape retrieval. It describes four contributions to this domain. The first is an improvement of the k-means method, in order to find the best partition of voxels inside a given shape; these partitions permit matching shapes using an optimal matching in a bipartite graph. The second contribution is the fusion of two descriptors, one local, the other global, with the product rule. The third contribution considers the complete graph, the vertices of which are the shapes in the database and the query; edges are labelled with several distances, one per descriptor. The method then computes, with linear programming, the convex combination of distances which maximizes either the sum of the lengths of all shortest paths from the query to all shapes of the database, or the length of the shortest path in the graph from the query to the current shape compared with the query. The fourth contribution consists in perturbing the query shape, to bring it closer to shapes in the database, for any given descriptors. This method is massively parallel and a multi-agent architecture is proposed. These methods are compared to classical methods in the field and achieve better retrieval performance.

Keywords: Shape matching, shape descriptors, dissimilarity measures, shape retrieval


Solving with or Without Equations

D. Michelucci. Solving with or Without Equations. Invited talk at ADG 2016, Automated Deduction in Geometry, 2016, Strasbourg, France.

  Abstract. In Geometric Constraint Solving, especially for CAD/CAM applications, equations are often replaced with algorithms: they are more general and more convenient than equations; consider, e.g., defining the distance between a point and a segment. CAD/CAM also uses piecewise polynomials, which are not polynomials. It needs optimization (for the orthogonal projection of a point on a non-linear surface), algorithmic shapes (staircases, mechanisms) aka features or parametric shapes, subdivision surfaces which have no equation, etc. Yet GCS in CAD/CAM can still benefit from symbolic tools like DAGs (Directed Acyclic Graphs) and dual numbers. The latter permit computing derivatives exactly. Finally, we conjecture that at least some algorithms can be converted to systems of equations: the if-then-else instruction and the sign function are indeed converted into systems of polynomial equations. Thus Computer Algebra applies to piecewise polynomials after all, and maybe to more.
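
The dual-number idea mentioned above can be sketched in a few lines (a minimal illustration, not the talk's implementation): arithmetic on pairs a + bε with ε² = 0 propagates exact derivatives through any algorithm built from + and ×.

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0; b carries the exact derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):  # any algorithm built from + and * works, not only closed formulas
    return 3 * x * x + 2 * x + 1

d = f(Dual(2.0, 1.0))  # seed the derivative with 1 at x = 2
print(d.a, d.b)        # f(2) = 17.0, f'(2) = 14.0
```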

Extended abstract Slides


A 3D shape matching and retrieval approach based on fusion of curvature and geometric diffusion features

Bilal Mokhtari, Kamal Melkemi, Dominique Michelucci, Sebti Foufou. A 3D shape matching and retrieval approach based on fusion of curvature and geometric diffusion features. IJCAT, International Journal of Computer Applications in Technology, Inderscience Publishers.

  Abstract. The majority of shape matching and retrieval methods use only one single shape descriptor. Unfortunately, no shape descriptor is sufficient to provide suitable results for all kinds of shapes. The most common way to improve the performance of shape descriptors is to fuse them.

In this paper, we propose a new 3D matching and retrieval approach based on a fully unsupervised fusion of curvature and geometric diffusion descriptors. To improve retrieval precision, we use two descriptors based on local and global features extracted from a shape, and automatically combine these features using a fusion method called the Product rule, which combines the values assigned to vertices by the two descriptors. This fusion rule gives better results than other well-known fusion schemes such as the Max, Min and Linear rules. The proposed approach considerably improves retrieval precision, even under pose changes, as shown by the retrieval results obtained on several popular 3D shape benchmarks.
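
The fusion rules compared here can be sketched as follows (the dissimilarity values are hypothetical, for illustration only):

```python
def fuse(d_local, d_global, rule="product"):
    """Combine two descriptor dissimilarities with a fusion rule."""
    rules = {"product": lambda a, b: a * b,
             "max": max,
             "min": min,
             "linear": lambda a, b: 0.5 * (a + b)}
    return rules[rule](d_local, d_global)

# hypothetical (local, global) dissimilarities of database shapes to a query
dists = {"chair": (0.2, 0.9), "table": (0.4, 0.3), "lamp": (0.8, 0.1)}
ranked = sorted(dists, key=lambda s: fuse(*dists[s]))
print(ranked)  # ['lamp', 'table', 'chair'] under the product rule
```

Note how the product rule rewards shapes on which at least one descriptor reports a very small dissimilarity, whereas the max rule would rank "table" first here.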

Keywords. 3D shape matching ; 3D shape retrieval ; curvature-based features ; diffusion geometry ; combination schemes; feature fusion; 3D shape descriptors; triangular meshes; similarity measures.

LINK


Variational Geometric Modelling with Black Box Constraints and DAGs

Gilles Gouaty, Lincong Fang, Dominique Michelucci, Marc Daniel, Jean-Philippe Pernot, Romain Raffin, Sandrine Lanquetin, Marc Neveu. Variational Geometric Modelling with Black Box Constraints and DAGs. Journal Computer-Aided Design, Elsevier editor.

Abstract: CAD modelers enable designers to construct complex 3D shapes with high-level B-Rep operators, which avoids the burden of low-level geometric manipulations. However, a gap still exists between the shape the designers have in mind and the way they have to decompose it into a sequence of modeling steps. To bridge this gap, Variational Modeling enables designers to specify constraints the shape must respect. The constraints are converted into an explicit system of mathematical equations (potentially with some inequalities) which the modeler numerically solves. However, most available programs are 2D sketchers, basically because in higher dimension some constraints may have complex mathematical expressions. This paper introduces a new approach to sketch constrained 3D shapes. The main idea is to replace explicit systems of mathematical equations with (mainly) Computer Graphics routines considered as Black Box Constraints. The obvious difficulty is that the arguments of all routines must have known numerical values. The paper shows how to solve this issue, i.e., how to solve and optimize without equations. The feasibility and promise of this approach are illustrated with the developed DECO (Deformation by Constraints) prototype.
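
The black-box-constraint idea can be illustrated on the point-segment distance routine (a toy sketch, not the DECO solver): the constraint function is an algorithm with a branch, not one equation, yet it can still be solved numerically, here by simple bisection.

```python
def dist_point_segment(p, a, b):
    """Black-box routine: distance from point p to segment [a, b] (an algorithm, not one equation)."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * ax + (p[1] - a[1]) * ay) / (ax * ax + ay * ay)
    t = max(0.0, min(1.0, t))          # the branch: clamp to the segment
    cx, cy = a[0] + t * ax, a[1] + t * ay
    return ((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5

def bisect(f, lo, hi, it=60):
    """Solve f(x) = 0 on [lo, hi] assuming a sign change; f is a black box."""
    for _ in range(it):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# constraint: point (x, 0.5) lies at distance 1 from segment (0,0)-(1,0), with x >= 1
f = lambda x: dist_point_segment((x, 0.5), (0.0, 0.0), (1.0, 0.0)) - 1.0
x = bisect(f, 1.0, 3.0)
print(round(x, 6))  # 1.866025, i.e. x = 1 + sqrt(3)/2
```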

sciencedirect LINK   LINK


Hidden Markov Random Fields and Direct Search Methods for Medical Image Segmentation

El-Hachemi Guerrout, Samy Ait-Aoudia, Dominique Michelucci and Ramdane Mahiou. Hidden Markov Random Fields and Direct Search Methods for Medical Image Segmentation. ICPRAM 2016, International Conference on Pattern Recognition Applications and Methods. Rome, February 24–26, 2016.

Abstract. The goal of image segmentation is to simplify the representation of an image into items that are meaningful and easier to analyze. Medical image segmentation is one of the fundamental problems in the image processing field. It aims to provide crucial decision support to physicians. There is no single way to perform segmentation; several methods are based on HMRF. Hidden Markov Random Fields (HMRF) constitute an elegant way to model the segmentation problem. This modelling leads to the minimization of an energy function. In this paper we investigate direct search methods, namely the Nelder-Mead and Torczon methods, to solve this optimization problem. The quality of segmentation is evaluated on ground-truth images using the Kappa index, also called the Dice Coefficient (DC). The results show the superiority of these methods compared to other methods.
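
A bare-bones Nelder-Mead iteration, applied here to a toy quadratic "energy" rather than a real HMRF energy, illustrates the kind of derivative-free direct search investigated in the paper (a simplified sketch; real implementations add convergence tests and adaptive parameters):

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex minimizer (illustrative, not HMRF-specific)."""
    n = len(x0)
    # initial simplex: x0 plus one perturbed vertex per coordinate
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                            for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]      # reflection
        if f(refl) < f(best):
            exp = [centroid[j] + 2 * (centroid[j] - worst[j]) for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl        # expansion
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [centroid[j] + 0.5 * (worst[j] - centroid[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr                                # contraction
            else:                                                  # shrink toward best
                simplex = [best] + [[(p[j] + best[j]) / 2 for j in range(n)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

energy = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2  # toy energy, minimum at (1, 2)
xmin = nelder_mead(energy, [0.0, 0.0])
print(xmin)  # converges near (1, 2)
```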

LINK


2015


An improved star test for implicit polynomial objects

Lincong Fang, Dominique Michelucci, Sebti Foufou. An improved star test for implicit polynomial objects. Journal Computer-Aided Design, Elsevier editor. Available on Line 21 July 2015. Accepted in conference SPM Solid and Physical Modeling 2015, Salt Lake city.

Abstract. For a given point set, a particular point is called a star if it can see all the boundary points of the set. The star test determines whether a candidate point is a star for a given set. It is a key component of some topology computing algorithms such as Connected components via Interval Analysis (CIA) and Homotopy type via Interval Analysis (HIA). Those algorithms decompose the input object using axis-aligned boxes, so that each box either does not intersect the object, or intersects it and has its center as a star point of the intersection. Graphs or simplicial complexes describing the topology of the objects can be obtained by connecting these star points following different rules. The star test is performed for simple primitive geometric objects, because complex objects can be constructed using Constructive Solid Geometry (CSG), and the star property is preserved by boolean operations. In this paper, we improve the method to perform the test for implicit objects. For a primitive set defined by an implicit polynomial equation, the polynomial is made homogeneous with the introduction of an auxiliary variable, which reduces the degree of the star condition. A linear programming optimization is introduced to further improve the performance. Several examples illustrate the experimental results of our method.
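
The star property can be checked naively by sampling (a toy illustration only; the paper's contribution is a certified test via homogenization and linear programming, not this sampling): a candidate c sees a boundary point x of the set p ≤ 0 when ∇p(x) · (x − c) > 0.

```python
from math import cos, sin, pi

def is_star_sampled(grad, boundary, c, samples=360):
    """Naive sampled star test: c sees boundary point x iff grad p(x) . (x - c) > 0."""
    for k in range(samples):
        x = boundary(2 * pi * k / samples)
        g = grad(x)
        if g[0] * (x[0] - c[0]) + g[1] * (x[1] - c[1]) <= 0:
            return False
    return True

# unit disc: p(x, y) = x^2 + y^2 - 1
boundary = lambda t: (cos(t), sin(t))
grad = lambda x: (2 * x[0], 2 * x[1])
print(is_star_sampled(grad, boundary, (0.0, 0.0)))  # True: the center is a star
print(is_star_sampled(grad, boundary, (2.0, 0.0)))  # False: an exterior point is not
```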

LINK   ScienceDirect LINK


Re-parameterization reduces irreducible geometric constraint systems

Hichem Barki, Lincong Fang, Dominique Michelucci, Sebti Foufou. Re-parameterization reduces irreducible geometric constraint systems. Journal Computer-Aided Design, Elsevier editor. Available on Line 29 July 2015. Accepted in conference SPM Solid and Physical Modeling 2015, Salt Lake city.

Abstract. You recklessly told your boss that solving a non-linear system of size n (n unknowns and n equations) requires a time proportional to n, as you were not very attentive during algorithmic complexity lectures. So now, you have only one night to solve a problem of big size (e.g., 1000 equations/unknowns), otherwise you will be fired in the next morning. The system is well-constrained and structurally irreducible: it does not contain any strictly smaller well-constrained subsystems. Its size is big, so the Newton–Raphson method is too slow and impractical. The most frustrating thing is that if you knew the values of a small number k ≪ n of key unknowns, then the system would be reducible to small square subsystems and easily solved. You wonder if it would be possible to exploit this reducibility, even without knowing the values of these few key unknowns. This article shows that it is indeed possible. This is done at the lowest level, at the linear algebra routines level, so that numerous solvers (Newton–Raphson, homotopy, and also p-adic methods relying on Hensel lifting) widely involved in geometric constraint solving and CAD applications can benefit from this decomposition with minor modifications. For instance, with k ≪ n key unknowns, the cost of a Newton iteration becomes O(kn²) instead of O(n³). Several experiments showing a significant performance gain of our re-parameterization technique are reported in this paper to consolidate our theoretical findings and to motivate its practical usage for bigger systems.
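
The claimed gain is easy to quantify with a back-of-the-envelope flop-count model (illustrative only, using the asymptotic costs from the abstract):

```python
def newton_iteration_cost(n, k=None):
    """Flop-count model: dense linear solve O(n^3) vs. re-parameterized O(k * n^2)."""
    return k * n ** 2 if k else n ** 3

n, k = 1000, 5
speedup = newton_iteration_cost(n) / newton_iteration_cost(n, k)
print(speedup)  # n/k = 200.0: each Newton iteration is 200x cheaper in this model
```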

ScienceDirect link LINK  


Extending CSG with projections: Towards formally certified geometric modeling

George Tzoumas, Dominique Michelucci, Sebti Foufou. Extending CSG with projections: Towards formally certified geometric modeling. Computer-Aided Design. Volume 66, September 2015, Pages 45–54.

We extend traditional Constructive Solid Geometry (CSG) trees to support the projection operator. Existing algorithms in the literature prove various topological properties of CSG sets. Our extension readily allows these algorithms to work on a greater variety of sets, in particular parametric sets, which are extensively used in CAD/CAM systems. Constructive Solid Geometry allows for algebraic representation which makes it easy for certification tools to apply. A geometric primitive may be defined in terms of a characteristic function, which can be seen as the zero-set of a corresponding system along with inequality constraints. To handle projections, we exploit the Disjunctive Normal Form, since projection distributes over union. To handle intersections, we transform them into disjoint unions. Each point in the projected space is mapped to a contributing primitive in the original space. This way we are able to perform gradient computations on the boundary of the projected set through equivalent gradient computations in the original space. By traversing the final expression tree, we are able to automatically generate a set of equations and inequalities that express either the geometric solid or the conditions to be tested for computing various topological properties, such as homotopy equivalence. We conclude by presenting our prototype implementation and several examples.

Science Direct LINK. See draft below.


Solving the pentahedron problem

Barki, H., Cane, J. M., Garnier, L., Michelucci, D., & Foufou, S. (2015). Solving the pentahedron problem. Computer-Aided Design, 58, 200-209.

See related publication in 2014, in SPM 2014.

Science Direct LINK


Draft: Extending CSG with Projections

George Tzoumas, Dominique Michelucci, Sebti Foufou. Extending CSG with Projections: Towards Formally Certified Geometric Modeling. Submitted for publication.

We extend traditional Constructive Solid Geometry (CSG) trees to support the projection operator. Existing algorithms in the literature prove various topological properties of CSG sets. Our extension readily allows these algorithms to work on a greater variety of sets, in particular parametric sets, which are extensively used in CAD/CAM systems. Constructive Solid Geometry allows for algebraic representation which makes it easy for certification tools to apply. A geometric primitive may be defined in terms of a characteristic function, which can be seen as the zero-set of a corresponding system along with inequality constraints. To handle projections, we exploit the Disjunctive Normal Form, since projection distributes over union. To handle intersections, we transform them into disjoint unions. Each point in the projected space is mapped to a contributing primitive in the original space. This way we are able to perform gradient computations on the boundary of the projected set through equivalent gradient computations in the original space. By traversing the final expression tree, we are able to automatically generate a set of equations and inequalities that express either the geometric solid or the conditions to be tested for computing various topological properties, such as homotopy equivalence. We conclude by presenting our prototype implementation and several examples.

Keywords: geometric modeling, constructive solid geometry, projection, homotopy equivalence, topological properties, formal methods, constraint solving, disjunctive normal form

PDF


2014


Final report Qatar project NPRP number 09-906-1-137

Sebti Foufou, Dominique Michelucci. Final report Qatar project NPRP number 09-906-1-137. Design and implementation of a new geometric constraint solver based on Bernstein polytopes. Final report, May 2014. PDF


Solving the pentahedron problem

Hichem Barki, Jean-Marc Cane, Lionel Garnier, Dominique Michelucci, Sebti Foufou. Solving the pentahedron problem. Accepted in SPM 2014 (Solid and Physical Modeling, Hong-Kong), also published in Computer-Aided Design, 2015.

Abstract: Nowadays, all geometric modelers provide some tools for specifying geometric constraints. The 3D pentahedron problem is an example of a 3D Geometric Constraint Solving Problem (GCSP), composed of six vertices, nine edges, five faces (two triangles and three quadrilaterals), and defined by the lengths of its edges and the planarity of its quadrilateral faces. This problem seems to be the simplest non-trivial problem, as the methods used to solve the Stewart platform or octahedron problem fail to solve it. The naive algebraic formulation of the pentahedron yields an under-constrained system of twelve equations in eighteen unknowns. Even if the use of placement rules transforms the pentahedron into a well-constrained problem of twelve equations in twelve unknowns, the resulting system is still hard to solve for interval solvers. In this work, we focus on solving the pentahedron problem in a more efficient and robust way, by reducing it to a well-constrained system of three equations in three unknowns, which can be solved by any interval solver, thereby avoiding the use of placement rules since the new formulation is already well-constrained. Several experiments showing a considerable performance enhancement (×42) are reported in this paper to consolidate our theoretical findings. Throughout this paper, we also emphasize some interesting properties of the solution set, by showing that for a generic set of parameters, solutions in the form of 3D parallel-edge pentahedra exist almost all the time, and by providing a geometric construction for these solutions. The pentahedron problem also admits a finite number of degenerate 2D solutions. This work also studies how these interesting properties generalize to other polyhedra.


Re-paramétrisation et réduction des systèmes irréductibles

Jean-Marc Cane, Arnaud Kubicki, Dominique Michelucci, Hichem Barki, Sebti Foufou. Re-paramétrisation et réduction des systèmes irréductibles. REFIG: Revue électronique francophone d’informatique graphique, 2014.

Abstract. You have to solve a system of n non-linear equations in n unknowns. The system is well-constrained, and structurally irreducible: it contains no strictly smaller well-constrained subsystem. Its size is big, for example n = 1000, so the Newton-Raphson method is too slow. The most frustrating thing is that if you knew the values of a small number k ≪ n of the unknowns, then the system would be reducible into small square subsystems and easily solvable. You wonder if it would not be possible to exploit this reducibility, even without knowing the values of the key unknowns. This article shows that it is indeed possible. This is done at the lowest level, at the linear algebra routines level, so that numerous solvers (Newton-Raphson, homotopy, and also the p-adic method relying on Hensel lifting) can benefit from this decomposition with minor modifications: for example, with k ≪ n key unknowns, the cost of a Newton iteration becomes O(kn²) instead of O(n³).

Résumé. Vous devez résoudre un système de n équations non linéaires en n inconnues. Ce système est bien-contraint et structurellement irréductible : il ne contient pas de sous-système bien-contraint strictement plus petit. Il est de grande taille, par exemple n = 1000, si bien que la méthode de Newton-Raphson est trop lente. Le plus frustrant est que, si vous connaissiez la valeur de certaines inconnues clefs en très petit nombre (par exemple k = 5 ≪ n), alors le système se décomposerait en de nombreux sous-systèmes bien-contraints plus petits, et serait facilement soluble. Vous vous demandez s’il ne serait pas possible d’utiliser cette décomposition, bien que vous ne connaissiez pas les valeurs des inconnues clefs. Cet article montre que c’est possible. Ceci est fait au plus bas niveau, celui des procédures d’algèbre linéaire, si bien que de nombreux solveurs peuvent bénéficier de cette décomposition. Par exemple, avec k ≪ n inconnues clefs, le coût d’une itération de la méthode de Newton-Raphson chute de O(n³) à O(kn²).

Freely available on REFIG website


Witness computation for solving geometric constraint systems

Arnaud Kubicki, Sebti Foufou, Dominique Michelucci. Witness computation for solving geometric constraint systems. SAI’2014, Science and Information Conference 2014, pp. 759–770, August 27-29, 2014, London, UK.

Abstract. In geometric constraint solving, the constraints are represented with an equation system F(U, X) = 0, where X denotes the unknowns and U denotes a set of parameters. The target solution for X is denoted X_T. A witness is a couple (U_W, X_W) such that F(U_W, X_W) = 0. The witness is not the target solution, but they share the same combinatorial features, even when the witness and the target lie on two distinct connected components of the solution set of F(U, X) = 0. Thus a witness enables the qualitative study of the system: the detection of over- and under-constrained systems, the decomposition into irreducible subsystems, the computation of subsystem boundaries.

This paper investigates the witness computation in various configurations. The witness computation is studied under several numerical methods: Newton iterations from random seeds, either in ℝⁿ or in ℂⁿ, the Broyden-Fletcher-Goldfarb-Shanno method, and the Nelder-Mead simplex method. The robustness and performance of these methods are analyzed and compared. The paper also presents the numerical probabilistic method from which the witness method originated, and shows how the witness can be used for detecting dependent parameters within systems of geometric constraints.

Our own copy


New Geometric Constraint Solving Formulation: Application to the 3D Pentahedron

Hichem Barki, Jean-Marc Cane, Dominique Michelucci, and Sebti Foufou. New Geometric Constraint Solving Formulation: Application to the 3D Pentahedron. ICISP 2014, International Conference on Image and Signal Processing, June 30-July 2, 2014, Cherbourg, Normandy, France.

Abstract. Geometric Constraint Solving Problems (GCSP) are nowadays routinely investigated in geometric modeling. The 3D Pentahedron problem is a GCSP defined by the lengths of its edges and the planarity of its quadrilateral faces, yielding an under-constrained system of twelve equations in eighteen unknowns. In this work, we focus on solving the 3D Pentahedron problem in a more robust and efficient way, through a new formulation that reduces the underlying algebraic formulation to a well-constrained system of three equations in three unknowns, and avoids at the same time the use of placement rules that resolve the under-constrained original formulation. We show that geometric constraints can be specified in many ways and that some formulations are much better than others, because they are much smaller and they avoid spurious degenerate solutions. Several experiments showing a considerable performance enhancement (×42) are reported in this paper to consolidate our theoretical findings.

Our own copy LINK


Data structures and algorithms for topological analysis

Jean-Marc CANE, George M. TZOUMAS, Dominique MICHELUCCI, Marta HIDALGO, Sebti FOUFOU. Data structures and algorithms for topological analysis. SAI’2014, Science and Information Conference 2014, pp. 302–312, August 27-29, 2014, London, UK.

Abstract. One of the steps of geometric modeling is to know the topology and/or the geometry of the objects considered. This paper presents different data structures and algorithms used in this study. We are particularly interested in algebraic structures, e.g., homotopy and homology groups, the Betti numbers, the Euler characteristic, and the Morse-Smale complex. We have to be able to compute these data structures, and for (homotopy and homology) groups, we also want to compute their generators. We are also interested in the CIA and HIA algorithms presented in the thesis of Nicolas DELANOUE, which respectively compute the connected components and the homotopy type of a set defined by a CSG (constructive solid geometry) tree. We would like to generalize these algorithms to sets defined by projection.

Our own copy


Extending constructive solid geometry to projections and parametric objects

George Tzoumas, Dominique Michelucci, Sebti Foufou. Extending constructive solid geometry to projections and parametric objects. Proceedings of TMCE 2014, May 19-23, 2014, Budapest, Hungary, ISBN 978-94-6186-177-1.

Abstract. Constructive Solid Geometry (CSG) is a widely used representation scheme in Computer Aided Design and Manufacturing (CAD/CAM). Solids are represented as Boolean constructions via regularized set operators. CSG representations are essentially binary expression trees with non-terminal nodes representing operators and terminal nodes typically representing primitive solids (like spheres). Existing algorithms in the literature compute various properties of CSG sets like connectivity and topology. In this paper we extend traditional CSG trees to support the projection operator. Such an extension allows existing algorithms for CSG sets to work on a greater variety of sets, in particular parametric sets, which are extensively used in CAD/CAM systems. A geometric primitive may be defined in terms of a characteristic function, which can be seen as the zero-set of a corresponding system along with inequality constraints. To handle projections, we exploit the Disjunctive Normal Form, since projection distributes over union. To handle intersections, we introduce the notion of dominant sets that allows us to express intersections as disjoint unions. Finally, we introduce the join operator as a means to deal with primitives consisting of more than one manifold. Our algorithm, based on a traversal of the final expression tree, generates a set of geometric constraints in the form of equations and inequalities that express the same set, avoiding extra unknowns like Lagrange multipliers. We conclude with implementation notes and possible extensions.

Our own copy


An emptiness test and a star test for patches

Lincong Fang, Jean-marc Cane, Dominique Michelucci, Sebti Foufou. An emptiness test and a star test for patches. Proceedings of TMCE 2014, May 19-23, 2014, Budapest, Hungary, ISBN 978-94-6186-177-1.

  Abstract. Approaches for the emptiness test and the star test for parametric patches are discussed in this paper. A parametric patch may contain multiple points and/or critical points; the existence of such points in a studied box is discussed for each case. The star test facilitates topology analysis algorithms for geometric sets, such as C.I.A. (Connected components via Interval Analysis) and H.I.A. (Homotopy type via Interval Analysis). We implement a star test for Bézier patches which contain no loops composed of critical curves. Whereas tests for implicit sets rely on interval arithmetic, tests for patches are based on the de Casteljau algorithm. Our methods may fail because of insufficient accuracy or for some particular patches; in all these cases, an error code is returned.
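
The de Casteljau algorithm underlying these patch tests is just repeated linear interpolation of the Bernstein-Bézier coefficients (shown here in the scalar univariate case for brevity):

```python
def de_casteljau(ctrl, t):
    """Evaluate a polynomial in Bernstein-Bézier form by repeated linear interpolation."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# the cubic with Bernstein coefficients [0, 0, 0, 1] is t^3
print(de_casteljau([0.0, 0.0, 0.0, 1.0], 0.5))  # 0.125
```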

Our own copy


Dynamic Clustering-based Method for Shape Recognition and Retrieval

Bilal Mokhtari, Kamal Eddine Melkemi, Dominique Michelucci and Sebti Foufou. Dynamic Clustering-based Method for Shape Recognition and Retrieval. Proceedings of TMCE 2014, May 19-23, 2014, Budapest, Hungary, ISBN 978-94-6186-177-1.

  Abstract. This paper presents a shape matching framework based on a new shape decomposition approach. A new region-based shape descriptor is proposed to compute the best match between given 2D or 3D shapes. In order to find similar shapes in a database, we first split the interior of each shape into the adequate set of parts, classes, or ellipsoids, then find the corresponding parts between different shapes, and finally compute their similarity.

Essentially, we compute the best shape decomposition into k classes using an improved version of the k-means clustering algorithm, without fixing the number of parts a priori. Additionally, we propose a new tool which determines the best ellipsoid packing in order to efficiently represent a shape according to its components.

The shape recognition process compares the optimal ellipsoidal partition of the new shape with the different models of a database and extracts the closest shapes. The performances of our shape matching framework are shown through experiments on various data of MPEG-7 and benchmark databases.

Our own copy


2013


Reliable Outer Bounds for the Dual Simplex Algorithm with Interval Right-hand Side

Christoph Fuenfzig, Dominique Michelucci, Sebti Foufou. Reliable Outer Bounds for the Dual Simplex Algorithm with Interval Right-hand Side. In ADVCOMP 2013, The Seventh International Conference on Advanced Engineering Computing and Applications in Sciences (pp. 49-54).

Abstract: In this article, we describe the reliable computation of outer bounds for linear programming problems occurring in linear relaxations derived from the Bernstein polynomials. The computation uses interval arithmetic for the Gauss-Jordan pivot steps on a simplex tableau. The resulting errors are stored as interval right-hand sides. Additionally, we show how to generate a start basis for linear programs of this type. We give details of the implementation using OpenMP and comment on numerical experiments.

LINK    our copy


Solution isolation strategies for the Bernstein polytopes-based solver

Djedaini, M., Barki, H., Foufou, S., and Michelucci, D. (2013, November). Solution isolation strategies for the Bernstein polytopes-based solver. In GCC Conference and Exhibition (GCC), 2013 7th IEEE (pp. 234-239). IEEE.

Abstract. The Bernstein polytopes-based solver is a new method developed to solve systems of nonlinear equations, which often occur in Geometric Constraint Solving Problems. The principle of this solver is to linearize nonlinear monomials and then to solve the resulting problems through linear programming. However, without any strategy for isolating the many solutions of multiple-solution systems, this solver is slow in practice. To overcome this problem, we propose in this work a study of several strategies for solution isolation, through the split of solution boxes into several subboxes, according to three main steps answering the questions: when, where, and how to perform a split? We provide a detailed benchmark evaluating both time and space complexities for the proposed splitting strategies, applied to several Geometric Constraint Solving Problems widely encountered in geometric modeling. We also compare several linear programming solvers within our Bernstein solver.

LINK


2012


Kernel functions and coordinate-free equations of implicit curves and surfaces. Dominique Michelucci and Sebti Foufou. Dijon, Burgundy university, France. Unpublished article.

  Abstract. Recently, several authors have proposed coordinate-free formulations of geometric constraints. Thus the question arises of formulating all geometric constraints in such a way, for example: six 2D points must lie on a common conic curve, ten 2D points must lie on a common 2D cubic, ten 3D points must lie on a common quadric, etc. This draft gives (for the first time, as far as we know) an intrinsic condition involving only scalar products of vectors, first, for coplanar points to lie on a common degree d algebraic curve, and, second, for 3D points to lie on a common degree d algebraic surface. Kernel functions are used; they permit extending relations à la Cayley-Menger.


Interrogating witnesses for geometric constraint solving, Journal Information and Computation

Interrogating witnesses for geometric constraint solving by Sebti Foufou and Dominique Michelucci, Information and Computation, Volume 216, July 2012, Pages 24-38.

Our own copy

Classically, geometric constraint solvers use graph-based methods to decompose systems of geometric constraints. These methods have intrinsic limitations, which the witness method overcomes; a witness is a solution of a variant of the system. This paper details the computation of a basis of the vector space of free infinitesimal motions of a typical witness, and explains how to use this basis to interrogate the witness for dependence detection. The paper shows that the witness method detects all kinds of dependences: structural dependences already detectable by graph-based methods, but also non-structural dependences, due to known or unknown geometric theorems, which are undetectable by graph-based methods. It also discusses how to decide about the rigidity of a witness and how to decompose it.
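
The free infinitesimal motions of a witness form the kernel of the Jacobian of the constraints at that witness; a tiny example (two 2D points with one distance constraint; illustrative code, not the paper's solver) counts them via a numerical rank:

```python
def rank(mat, eps=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    r = 0
    for col in range(len(m[0])):
        piv = max(range(r, len(m)), key=lambda i: abs(m[i][col]))
        if abs(m[piv][col]) < eps:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == len(m):
            break
    return r

# witness: A = (0,0), B = (1,0) with one constraint (xB-xA)^2 + (yB-yA)^2 = 1.
# Jacobian of the constraint w.r.t. (xA, yA, xB, yB) at the witness:
J = [[-2.0, 0.0, 2.0, 0.0]]
dof = 4 - rank(J)
print(dof)  # 3 free infinitesimal motions: two translations and one rotation
```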


Polytope-based computation of polynomial ranges, Journal of CAGD 2012

Christoph Fuenfzig, Dominique Michelucci, Sebti Foufou. Polytope-based computation of polynomial ranges. Journal of Computer Aided Geometric Design, 2012, pp. 18-29.

Polynomial ranges are commonly used for numerically solving polynomial systems with interval Newton solvers. Often ranges are computed using the convex hull property of the tensorial Bernstein basis, whose size is exponential in the number n of variables. In this paper, we consider methods to compute tight bounds for polynomials in n variables by solving two linear programming problems over a polytope. We formulate a polytope defined as the convex hull of the coefficients with respect to the tensorial Bernstein basis, and we formulate several polytopes based on the Bernstein polynomials of the domain. These Bernstein polytopes can be defined by a polynomial number of halfspaces. We give the number of vertices, the number of hyperfaces, and the volume of each polytope for n = 1, 2, 3, 4, and we compare the computed range widths for random n-variate polynomials for n ≤ 10. The Bernstein polytope of polynomial size gives only marginally worse range bounds compared to the range bounds obtained with the tensorial Bernstein basis of exponential size.
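
The convex hull property behind these range bounds is easy to see in the univariate case (a small sketch; the paper's point is the multivariate, polytope-based version): the min and max Bernstein coefficients enclose the range of the polynomial on [0,1].

```python
from math import comb

def bernstein_coeffs(a):
    """Coefficients of p(x) = sum a[i] x^i in the degree-n Bernstein basis on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(j, i) * a[i] / comb(n, i) for i in range(j + 1))
            for j in range(n + 1)]

# p(x) = 2x^2 - 2x + 1, whose true range on [0, 1] is [0.5, 1]
b = bernstein_coeffs([1.0, -2.0, 2.0])
print(b)               # [1.0, 0.0, 1.0]
print(min(b), max(b))  # the enclosure [0.0, 1.0] contains the true range [0.5, 1.0]
```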


Dominique Michelucci, Sebti Foufou, Arnaud Kubicki. On the complexity of the Bernstein combinatorial problem

This article proves that the Bernstein combinatorial problem is NP-hard.

Every multivariate polynomial p(x), x = (x1, ..., xn) in [0,1]^n, is enclosed in the interval given by the smallest and the greatest of its coefficients in the Tensorial Bernstein Basis (TBB). Knowing that the total number of these TBB coefficients, ∏_{i=1}^{n} (1 + di), is exponential with respect to the number of variables n, even if all partial degrees di equal 1, a combinatorial problem arises: is it possible to compute the smallest and the greatest coefficients in polynomial time? This article proves that the 3-SAT problem, known to be NP-complete, polynomially reduces to the above combinatorial problem, which lets us conclude that this problem is NP-hard.

Submitted for publication in 2012. Accepted for publication in 2012 in the Special Issue of the journal Reliable Computing on the Use of Bernstein Polynomials in Reliable Computing. Guest Editors: Jürgen Garloff and Andrew P. Smith.

Our own copy


Sebti Foufou and Dominique Michelucci. Bernstein basis and its applications in solving geometric constraint systems

This article is an introduction to Bernstein-based solvers: the simplest solvers convert polynomials into the tensorial Bernstein basis (TBB), and the smallest and the greatest TBB coefficients give tight lower and upper bounds on the values of the polynomial over the unit hypercube. But a combinatorial problem arises: there is an exponential number of TBB polynomials and coefficients, whereas the number of coefficients in the canonical tensorial basis usually stays polynomial in the number of variables. This is the combinatorial Bernstein problem. We introduced Bernstein polytopes to bypass this issue.

Submitted for publication in 2012. Accepted for publication in 2012 in the Special Issue of the journal Reliable Computing on the Use of Bernstein Polynomials in Reliable Computing. Guest Editors: Jürgen Garloff and Andrew P. Smith.

Our own copy


2011


Linear Programming for Bernstein Based Solvers, ADG

Linear Programming for Bernstein Based Solvers, Dominique Michelucci and Christoph Fuenfzig, Automated Deduction in Geometry Lecture Notes in Computer Science, 2011, Volume 6301/2011, 163-178, DOI: 10.1007/978-3-642-21046-4_8.

Our own copy

Some interval Newton solvers rely on tensorial Bernstein bases to compute sharp enclosures of multivariate polynomials on the unit hypercube. These solvers compute all coefficients with respect to tensorial Bernstein bases. Unfortunately, polynomials become of exponential size in tensorial Bernstein bases. This article gives the first polynomial-time method to address this issue. A polynomial number of relevant Bernstein polynomials is selected. The non-negativity of each of these Bernstein polynomials gives a linear inequality in a space connected to the monomials of the canonical tensorial basis. We resort to linear programming on the resulting Bernstein polytope to compute range bounds of a polynomial or bounds of its zero set.


PhD Report. TAHARI Abdou El Karim. Modélisation géométrique par contraintes : solveurs basés sur l’arithmétique des intervalles

TAHARI Abdou El Karim. Modélisation géométrique par contraintes : solveurs basés sur l’arithmétique des intervalles (Geometric modelling by constraints: solvers based on interval arithmetic). PhD thesis, Ecole Nationale Supérieure d’Informatique, Algeria. 92 pages. Defended in November 2011. LINK ESI webpage    LINK   LINK


Extensions of the witness method to characterize under-, over- and well-constrained geometric constraint systems

This article: Extensions of the witness method to characterize under-, over- and well-constrained geometric constraint systems by Simon Thierry, Pascal Schreck, Dominique Michelucci, Christoph Fuenfzig, Jean-David Génevaux, was published in CAD Volume 43, Issue 10, October 2011, Pages 1234–1249 (Solid and Physical Modeling 2010).

This paper describes new ways to tackle several important problems encountered in geometric constraint solving, in the context of CAD, and which are linked to the handling of under- and over-constrained systems. It presents a powerful decomposition algorithm of such systems.

Our methods are based on the witness principle whose theoretical background is recalled in a first step. A method to generate a witness is then explained. We show that having a witness can be used to incrementally detect over-constrainedness and thus to compute a well-constrained boundary system. An algorithm is introduced to check if anchoring a given subset of the coordinates brings the number of solutions to a finite number.

An algorithm to efficiently identify all maximal well-constrained parts of a geometric constraint system is described. This allows us to design a powerful algorithm of decomposition, called W-decomposition, which is able to identify all well-constrained subsystems: it manages to decompose systems which were not decomposable by classic combinatorial methods.

LINK


2010


Bases tensorielles de Bernstein et solveurs (in French)

Bases tensorielles de Bernstein et solveurs. A. K. Tahari, C. Fuenfzig, D. Michelucci, S. Foufou, S. Ait Aoudia. Technique et Science Informatiques, vol 29/6, 2010, pp 629–663.

Personal copy

LINK

Tensorial Bernstein bases can be used to compute sharp ranges of the values of polynomials over a box, and to solve systems of polynomial equations in Computer Graphics, Geometric Modelling, Geometric Constraints Solving. Two kinds of solvers are presented. The first is classical, and applies to small systems, up to 6 or 7 unknowns. The second is new and applies to systems of arbitrary size. It requires a new mathematical object, the Bernstein polytope.


Rigorous Outer Bounds with the Dual Simplex Algorithm in Tableau Form, SCAN 2010

C. Fuenfzig, D. Michelucci, S. Foufou. Rigorous Outer Bounds with the Dual Simplex Algorithm in Tableau Form, presented at SCAN 2010, Ecole Normale Supérieure, Lyon, France, 2010.

In this paper, we describe how to compute rigorous outer bounds for linear programming problems occurring in linear relaxations derived from Bernstein polynomials. The computation uses interval arithmetic for the Gauss-Jordan pivoting steps on the tableau. The resulting errors are stored as interval right-hand sides. Additionally, we show how to generate a start basis for linear programs of this type. We give details of the implementation and comment on numerical experiments.


Optimizations for Bernstein-based solvers using domain reduction

Christoph Fuenfzig, Dominique Michelucci, Sebti Foufou. Optimizations for Bernstein-based solvers using domain reduction. Proceedings of the TMCE 2010 Symposium, Tools and Methods of Competitive Engineering, April 12–16, 2010, Ancona, Italy, ed. by I. Horváth, F. Mandorli and Z. Rusák.

Personal copy


Optimizations for Tensorial Bernstein-Based Solvers by using Polyhedral Bounds

C. Fuenfzig, D. Michelucci, S. Foufou. Optimizations for Tensorial Bernstein-Based Solvers by using Polyhedral Bounds, International Journal of Shape Modeling, Vol. 16, Nr. 1-2 (2010) , p. 109-128.

Personal copy of TMCE 2010 conference.

LINK

The tensorial Bernstein basis for multivariate polynomials in n variables has 3^n basis functions for degree 2. Consequently, computing the representation of a multivariate polynomial in the tensorial Bernstein basis is an exponential-time algorithm, which makes tensorial Bernstein-based solvers impractical for systems with more than n = 6 or 7 variables. This article describes a polytope (the Bernstein polytope) with O(n^2) faces, which allows bounding a sparse multivariate polynomial expressed in the canonical basis by solving several linear programming problems. We compare the performance of a subdivision solver using domain reductions by linear programming with a solver using a change to the tensorial Bernstein basis for domain reduction. The performance is similar for n = 2 variables, but only the solver using linear programming on the Bernstein polytope can cope with a large number of variables. We demonstrate this difference with two formulations of the forward kinematics problem of a Gough-Stewart parallel robot: a direct Cartesian formulation, and a coordinate-free formulation using Cayley-Menger determinants followed by a computation of Cartesian coordinates. Furthermore, we present an optimization of the Bernstein polytope-based solver for systems containing only the monomials x_i and x_i^2. For these, it is possible to obtain even better domain bounds at no cost, using the quadratic curve (x_i, x_i^2) directly.


Polytope Based Computations of Polynomials Range

Polytope-Based Computation of Polynomial Ranges, by C. Fuenfzig, D. Michelucci and S. Foufou, in SAC’10: Proceedings of the 2010 ACM Symposium on Applied Computing, March 22-26, 2010, pg 1247-1252.

The computation of the range of multivariate polynomials over the unit hypercube (or any box) is reduced to minimizing and maximizing on various polytopes, with Linear Programming methods. Many polytopes are defined and compared.

Personal copy


Using the witness method to detect rigid subsystems of geometric constraints in CAD

Using the witness method to detect rigid subsystems of geometric constraints in CAD by Dominique Michelucci, Simon Thierry, Pascal Schreck, Christoph Fuenfzig, Jean-David Génevaux, Symposium on Solid and Physical Modeling, Haifa, Israel, 2010.

Our own copy

LINK

This paper deals with the resolution of geometric constraint systems encountered in CAD-CAM. The main results are that the witness method can be used to detect that a constraint system is over-constrained and that the computation of the maximal rigid subsystems of a system leads to a powerful decomposition method. In a first step, we recall the theoretical framework of the witness method in geometric constraint solving and extend this method to generate a witness. We show then that it can be used to incrementally detect over-constrainedness. We give an algorithm to efficiently identify all maximal rigid parts of a geometric constraint system. We introduce the algorithm of W-decomposition to identify all rigid subsystems: it manages to decompose systems which were not decomposable by classical combinatorial methods.


What is a line?

Dominique Michelucci. What is a line? International Workshop on Automated Deduction in Geometry 2010 pp 132–151

The playground is the complex projective plane. The article shows that the usual, naive lines are not all the lines. From naive lines (level 0), Pappus geometry creates new geometric objects (circles or conics) which can also be considered as (level 1) lines, in the sense that they fulfil Pappus’ axioms for lines. But Pappus’ theory also applies to these new lines. A formalization of Pappus geometry should make it possible to automate these generalizations of lines.

LINK   LINK   Personal copy   LINK   LINK  


Some Lemmas to...

Some lemmas to hopefully enable search methods to find short and human readable proofs for incidence theorems of projective geometry. Proceedings of ADG’10, the 8th International Conference on Automated Deduction in Geometry, pages 118-131, Springer-Verlag, Berlin, Heidelberg.

Personal copy

Search methods provide short and human readable proofs, i.e. with little algebra, of most of the theorems of the Euclidean plane. They are less successful and convincing for incidence theorems of projective geometry, which have received less attention up to now. This is due to the fact that basic notions, like angles and distances, which are relevant for Euclidean geometry, are no longer relevant for projective geometry. This article suggests that search methods can also provide short and human readable proofs of incidence theorems of projective geometry with well chosen notions, rules or lemmas. This article proposes such lemmas, and shows that they indeed permit finding by hand short proofs of some theorems of projective geometry.


2009


Interrogating witnesses for Geometric Constraint Solving

D. Michelucci, S. Foufou. Interrogating witnesses for Geometric Constraint Solving. SPM ’09 2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling Pages 343-348.

LINK   LINK   Our own copy.

Classically, geometric constraint solvers use graph-based methods to analyze systems of geometric constraints. These methods have intrinsic limitations, which the witness method overcomes. This paper details the computation of a basis of the vector space of the free infinitesimal motions of a typical witness, and explains how to use this basis to interrogate the witness for detecting all dependences between constraints: structural dependences already detectable by graph-based methods, and also non-structural dependences, due to known or unknown geometric theorems, which are undetectable with graph-based methods. The paper also discusses how to decide about the rigidity of a witness.


Christoph Fünfzig, Dominique Michelucci, Sebti Foufou. Nonlinear systems solver in floating-point arithmetic using LP reduction

Nonlinear systems solver in floating-point arithmetic using LP reduction by Christoph Fünfzig, Dominique Michelucci, Sebti Foufou. 2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling, pg 123–134.

Our own copy   LINK

This paper presents a new solver for systems of nonlinear equations. Such systems occur in Geometric Constraint Solving, e.g., when dimensioning parts in CAD-CAM, or when computing the topology of sets defined by nonlinear inequalities. The paper does not consider the problem of decomposing the system and assembling solutions of subsystems. It focuses on the numerical resolution of well-constrained systems. Instead of computing an exponential number of coefficients in the tensorial Bernstein basis, we resort to linear programming for computing range bounds of system equations or domain reductions of system variables. Linear programming is performed on a so-called Bernstein polytope: although it has an exponential number of vertices (each vertex corresponds to a Bernstein polynomial in the tensorial Bernstein basis), its number of hyperplanes is polynomial: O(n^2) for a system in n unknowns and equations, and total degree at most two. An advantage of our solver is that it can be extended to non-algebraic equations. In this paper, we present the Bernstein and LP polytope construction, and how to cope with floating-point inaccuracy so that a standard LP code can be used. The solver has been implemented with a primal-dual simplex LP code, and some implementation variants have been analyzed. Furthermore, we show geometric-constraint-solving applications, as well as numerical intersection and distance computation examples.


2008


Floating-point geometry: Towards guaranteed geometric computations with approximate arithmetics?

BAJARD, Jean-Claude, LANGLOIS, Philippe, MICHELUCCI, Dominique, et al. Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics. In : Advanced Signal Processing Algorithms, Architectures, and Implementations XVIII. SPIE, 2008. p. 165-176.

ABSTRACT. Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: it is unavoidable when the intersection point coordinates are non rational, and thus not representable using floating-point arithmetic.

A popular heuristic approach tests equalities and nullities up to a tolerance ϵ. But transitivity of equality is lost: we can have A ≈ B and B ≈ C, but A ≉ C (where A ≈ B means |A − B| < ϵ for two floating-point values A, B). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the swelling of interval widths along the computation. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where it is decidable by other means.

A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and for unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, of which an inner and an outer representation are computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this approach, and which cannot?

This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems they can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the geometric computations requirements.

KEYWORDS: Geometric computations, robustness issue, numerical inaccuracy, interval analysis, computational geometry, CAD-CAM, tolerant modeling.

Our own copy
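The loss of transitivity mentioned in the abstract above is easy to exhibit (a tiny illustration, not from the paper; the tolerance value is arbitrary):

```python
# Equality up to a tolerance eps is not transitive: each neighbour pair
# is within eps, but the endpoints are not.
EPS = 1e-6

def approx_eq(a, b, eps=EPS):
    return abs(a - b) < eps

a, b, c = 0.0, 0.7e-6, 1.4e-6
print(approx_eq(a, b), approx_eq(b, c), approx_eq(a, c))  # True True False
```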


Linear Programming for Bernstein Based Solvers

MICHELUCCI, Dominique et FÜNFZIG, Christoph. Linear programming for Bernstein based solvers. In : International Workshop on Automated Deduction in Geometry. Springer, Berlin, Heidelberg, 2008. p. 163-178.

Abstract. Some interval Newton solvers rely on tensorial Bernstein bases to compute sharp enclosures of multivariate polynomials p(x1, ..., xn) where xi ∈ [0,1]. Unfortunately, polynomials become of exponential size in tensorial Bernstein bases. This article gives the first polynomial-time method to address this issue. It proposes to resort to Linear Programming for computing tight ranges of multivariate polynomials on a given box (interval vector), and for reducing a box while preserving the included roots. The underlying Bernstein polytope, the feasible set of the LP problems, is defined. Its defining inequalities are given by the non-negativity of relevant Bernstein polynomials. It has an exponential number of vertices but only a polynomial number of hyperplanes and hyperfaces.

LINK


Robustness and Randomness

Dominique Michelucci, Jean-Michel Moreau, Sebti Foufou: Robustness and Randomness. Reliable Implementation of Real Number Algorithms 2008: 127-148. Publisher: Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany. Vol 06021. Editor: Peter Hertling and Christoph M. Hoffmann and Wolfram Luther and Nathalie Revol.

The study of robustness problems for computational geometry algorithms is a topic that has been subject to intensive research efforts from both computer science and mathematics communities. Robustness problems are caused by the lack of precision in computations involving floating-point instead of real numbers. This paper reviews methods dealing with robustness and inaccuracy problems. It discusses approaches based on exact arithmetic, interval arithmetic and probabilistic methods. The paper investigates the possibility to use randomness at certain levels of reasoning to make geometric constructions more robust.

Own copy

LINK


Isometry group, words and proofs of geometric theorems

Dominique Michelucci: Isometry group, words and proofs of geometric theorems. SAC 2008: 1821-1825

This paper shows that considering the group generated by orthogonal symmetries with respect to lines may give very short and readable proofs of geometric theorems. A short and readable proof of the fundamental Pascal’s theorem is provided for illustration.

LINK


2007


A New Robust Algorithm to Trace Curves

Dominique Faudot, Dominique Michelucci: A New Robust Algorithm to Trace Curves. Reliable Computing 13(4): 309-324 (2007)

We present here a new reliable algorithm to trace curves using interval arithmetic. We give several computable criteria which guarantee the convergence of the correction step of the classical predictor-corrector method. Our method prevents, for instance, jumping from one component of the curve to another; this kind of mistake typically causes inconsistencies in the topology of intersecting surfaces in geometric modelers.

LINK


Modélisation géométrique par contraintes

This is a survey in French about Geometric Constraints. It is written by Christophe Jermann, Dominique Michelucci, Pascal Schreck. It was published as a chapter in the book: Informatique graphique, modélisation géométrique et animation, editors: Dominique Bechmann and Bernard Péroche, Hermès science publications 2007.

LINK


2006


Geometric constraints solving: some tracks

Geometric constraints solving: some tracks, D. Michelucci, S. Foufou, Loïc Lamarque, Pascal Schreck. Proceedings of the 2006 ACM symposium on Solid and physical modeling, pp 185–196. ISBN 1-59593-358-1, Cardiff, Wales, United Kingdom.

This paper presents some important issues and potential research tracks for Geometric Constraint Solving: the use of the simplicial Bernstein base to reduce the wrapping effect in interval methods, the computation of the dimension of the solution set with methods used to measure the dimension of fractals, the pitfalls of graph based decomposition methods, the alternative provided by linear algebra, the witness configuration method, the use of randomized provers to detect dependences between constraints, the study of incidence constraints, the search for intrinsic (coordinate-free) formulations and the need for formal specifications.


Detecting all dependences in systems of geometric constraints using the witness method

Detecting all dependences in systems of geometric constraints using the witness method. Dominique Michelucci, Sebti Foufou Proceedings of the 6th international conference on Automated deduction in geometry, ADG’06, pp 98–112, Pontevedra, Spain. Springer-Verlag. ISBN 3-540-77355-X, 978-3-540-77355-9.

In geometric constraint solving, the detection of dependences and the decomposition of the system into smaller subsystems are two important steps that characterize any solving process; but today’s solvers, which are graph-based in most cases, fail to detect dependences due to geometric theorems and to decompose such systems. In this paper, we discuss why detecting all dependences between constraints is a hard problem, and propose to use the recently published witness method to detect both structural and non-structural dependences. We study various examples of constraint systems and show the promising results of the witness method for the detection of subtle dependences and for system decomposition.


D. Michelucci, S. Foufou. Geometric Constraint Solving: the Witness Configuration Method

D. Michelucci, S. Foufou. Geometric Constraint Solving: the Witness Configuration Method. Computer-Aided Design 38(4): 284-299 (2006).

link   LINK sciencedirect.com   LINK

Geometric constraint solving is a key issue in CAD, CAM and PLM. The systems of geometric constraints are today studied and decomposed with graph-based methods, before their numerical resolution. However, graph-based methods can detect only the simplest (called structural) dependences between constraints; they can not detect subtle dependences due to theorems. To overcome these limitations, this paper proposes a new method: the system is studied (with linear algebra tools) at a witness configuration, which is intuitively similar to the unknown one, and easy to compute.


Another Paradigm for Geometric Constraints Solving

Another Paradigm for Geometric Constraints Solving. Dominique Michelucci, Sebti Foufou, Loïc Lamarque, David Ménégaux. Canadian Conf. on Computational Geometry, CCCG 2006, Kingston, Ontario, August 14–16, 2006. pp 169–172.

Geometric constraints solving often relies on graph-based methods to decompose systems of geometric constraints. These methods have intrinsic and unavoidable limitations which are overcome by the witness method presented here.


Interval-based Tracing of Strange Attractors

Dominique Michelucci, Sebti Foufou: Interval-based Tracing of Strange Attractors. Int. J. Comput. Geometry Appl. 16(1): 27-40 (2006)

The method described here relies on interval arithmetic and graph theory to compute guaranteed coverings of strange attractors like Hénon attractor. It copes with infinite intervals, using either a geometric method or a new directed projective interval arithmetic.

LINK conference version in SCAN (Scientific computing, validated numerics, interval methods) 2000
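The basic operation behind such guaranteed coverings can be sketched with plain intervals (an illustrative toy; the paper additionally handles infinite intervals with a geometric method or a directed projective interval arithmetic, neither of which is shown here): one step of the Hénon map (x, y) → (1 + y − a·x², b·x) applied to a box encloses the images of all points of the box.

```python
# Minimal interval-arithmetic sketch: interval add, scale, and square,
# then the interval image of a box under the Henon map.

def i_add(u, v): return (u[0] + v[0], u[1] + v[1])
def i_scale(k, u):
    a, b = k * u[0], k * u[1]
    return (min(a, b), max(a, b))
def i_sqr(u):
    lo, hi = u
    if lo <= 0.0 <= hi:                      # zero inside: min square is 0
        return (0.0, max(lo*lo, hi*hi))
    return (min(lo*lo, hi*hi), max(lo*lo, hi*hi))

def henon_box(X, Y, a=1.4, b=0.3):
    """Interval image of the box X x Y under the Henon map."""
    X2 = i_add(i_add((1.0, 1.0), Y), i_scale(-a, i_sqr(X)))
    return X2, i_scale(b, X)

X, Y = (-0.1, 0.1), (0.2, 0.3)
BX, BY = henon_box(X, Y)
# The image of any sample point of the box lies inside the interval image:
x, y = 0.05, 0.25
fx, fy = 1 + y - 1.4 * x * x, 0.3 * x
print(BX[0] <= fx <= BX[1] and BY[0] <= fy <= BY[1])  # True
```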


Incidence Constraints: A Combinatorial Approach

Dominique Michelucci, Pascal Schreck: Incidence Constraints: A Combinatorial Approach. Int. J. Comput. Geometry Appl. 16(5-6): 443-460 (2006).

The simplest geometric constraints are incidences between points and lines in the projective plane. Surprisingly, this problem is universal, in the sense that all algebraic systems reduce to such geometric constraints. Detecting dependences between these geometric constraints is NP-complete. New methods to prove incidence theorems are proposed; they use strictly no computer algebra but only combinatorial arguments.

Keywords: Projective geometry, incidence constraint, combinatorial proof, hexamys, compatible matroid.

preprint


2005


Bernstein Based Arithmetic Featuring de Casteljau

Bernstein Based Arithmetic Featuring de Casteljau. Dominique Michelucci, Sebti Foufou, Loïc Lamarque, David Ménégaux. Canadian Conf. on Computational Geometry, CCCG 2005, pp. 215-218.

Bernstein-based interval analysis permits tracing algebraic curves and surfaces. In this paper, we propose to use the classical de Casteljau algorithm to improve the efficiency of the Bernstein-based method. The proposed tracing method gives significant results with functions of high degree. These results are illustrated and compared with other interval analysis approaches.
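The de Casteljau algorithm is a standard algorithm; a minimal sketch (not the paper's code) evaluates a polynomial given by its Bernstein coefficients at a parameter t and, as a by-product, recovers the Bernstein coefficients of the two halves, whose min/max give tighter and tighter enclosures during subdivision:

```python
# Classical de Casteljau: repeated linear interpolation of the
# coefficient rows. The first entry of each row gives the left-half
# coefficients, the last entry the right-half coefficients.

def de_casteljau(coeffs, t):
    """Return (value at t, left-half coeffs, right-half coeffs)."""
    left, right = [coeffs[0]], [coeffs[-1]]
    rows = list(coeffs)
    while len(rows) > 1:
        rows = [(1 - t) * u + t * v for u, v in zip(rows, rows[1:])]
        left.append(rows[0])
        right.append(rows[-1])
    return rows[0], left, right[::-1]

# p(x) = x^2 - x on [0,1]: Bernstein (degree 2) coefficients are
# b0 = p(0) = 0, b1 = -1/2, b2 = p(1) = 0.
value, left, right = de_casteljau([0.0, -0.5, 0.0], 0.5)
print(value)                  # -0.25 = p(1/2)
print(min(left), min(right))  # subdivided lower bounds, tighter than -0.5
```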


2004


D. Michelucci, S. Foufou. Using Cayley-Menger Determinants for Geometric Constraint Solving

D. Michelucci, S. Foufou. Using Cayley-Menger Determinants for Geometric Constraint Solving. ACM Symposium on Solid Modeling and Applications, in Genova, 2004. G. Elber, N. Patrikalakis, P. Brunet (Editors). pp 285 - 290. Eurographics Association Aire-la-Ville, Switzerland.

We use Cayley-Menger Determinants (CMDs) to obtain an intrinsic formulation of geometric constraints. First, we show that classical CMDs are very convenient to solve the Stewart platform problem. Second, issues like distances between points, distances between spheres, cocyclicity and cosphericity of points are also addressed. Third, we extend CMDs to deal with asymmetric problems. In 2D, the following configurations are considered: 3 points and a line; 2 points and 2 lines; 3 lines. In 3D, we consider: 4 points and a plane; 2 points and 3 planes; 4 planes.
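The classical CMD for four points, mentioned above, is a standard formula; a small sketch (illustrative, not the paper's solver, using exact rational arithmetic) builds the 5x5 matrix from squared mutual distances and checks the identity 288 V² = det(CM) for a tetrahedron of known volume:

```python
# Cayley-Menger determinant of 4 points in 3D: border row/column of ones,
# zero diagonal, squared distances elsewhere; equals 288 V^2.
from fractions import Fraction

def det(m):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    sign, d, n = 1, Fraction(1), len(m)
    for c in range(n):
        p = next((r for r in range(c, n) if m[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            m[c], m[p] = m[p], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for cc in range(c, n):
                m[r][cc] -= f * m[c][cc]
    return sign * d

# Tetrahedron (0,0,0),(1,0,0),(0,1,0),(0,0,1): volume 1/6, so 288 V^2 = 8.
cm = [[0, 1, 1, 1, 1],
      [1, 0, 1, 1, 1],
      [1, 1, 0, 2, 2],
      [1, 1, 2, 0, 2],
      [1, 1, 2, 2, 0]]
print(det(cm))  # 8
```

This is exactly the coordinate-free flavour of the formulation: the volume constraint is expressed purely in mutual distances, with no Cartesian coordinates.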


An implementation of Jurzak’s prover

An implementation of Jurzak’s prover, Dominique Michelucci, Jean-Paul Jurzak, isiCAD: Constraint-based Approaches and Methods of Mathematical Modelling for Intelligent CAD/CAM/CAE Systems: From Methods to Applications, 2004.

Abstract. This text reports on several implementations of Jurzak’s prover of geometric theorems, and on how this prover, initially intended for pedagogical purposes, can and should be used in the geometric constraint solvers of CAD/CAM geometric modellers.

PDF


2001


The Ellipsoidal Skeleton in Medical Applications

F. Banégas, M. Jaeger, D. Michelucci, M. Roelens. The Ellipsoidal Skeleton in Medical Applications. In Sixth Solid Modeling 2001, Ann Arbor, Michigan, June 4-8, 2001.

Personal PDF Slides PDF.

Medical imaging needs software tools for extracting relevant information for diagnosis (e.g. detection of fractures, tumors, anatomical deformations), for automatic reconstruction of the surface of studied organs (not directly provided by 3D medical images), for interactive visualization with semantic zooms and multiresolution capability, and for automatic recognition and template matching between medical shapes. Our paper describes such a tool, called the ellipsoidal skeleton, or E-skeleton for short.

From a segmented 3D medical image, we build with dynamic clustering a binary tree of classes: level n in the tree (1 for the root) is the best partition of the set of voxels of the studied organ into n classes: this partition maximizes the difference between classes (interclass variance) and the homogeneity of each and every class (intraclass variance).

For a given level in the tree, approximating each class with the best ellipsoid, and merging all ellipsoids with an implicit surface using Blinn’s technique, permits reconstructing the surface of the studied organ. To improve the quality of the reconstruction, a deformation based on the modal vibration model is applied to each ellipsoid; the best deformation parameters are computed with the tabu method. The description of this last surface is much more compact (several thousand times) than the initial 3D image.

The tree of classes turns out to be very robust: the same organ from various patients gives isomorphic trees. This permits automatic matching and recognition of medical shapes. For instance, four examinations of the eight carpal bones in the wrist have been automatically matched without mistakes, and likewise for the teeth of several dental arches. Relevant differences are also automatically detected, for instance an asymmetry between the left and right incisors of the same patient.

The E-skeleton tool is part of a commercial medical software package: Corpus2000, developed by CIRAD.
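The per-class "best ellipsoid" step can be sketched with a plain covariance-based fit (an assumption for illustration, shown in 2D for brevity; the paper's pipeline additionally uses modal deformations tuned by tabu search, not reproduced here): the ellipse axes are the eigenvectors of the class covariance matrix and the semi-axes are proportional to the square roots of its eigenvalues.

```python
import math

# Covariance-based ellipse fit for a 2D point class: center = mean,
# semi-axes from the closed-form eigenvalues of the 2x2 covariance.

def fit_ellipse(points):
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sxx = sum((x - cx) ** 2 for x, _ in points) / n
    syy = sum((y - cy) ** 2 for _, y in points) / n
    sxy = sum((x - cx) * (y - cy) for x, y in points) / n
    tr, dt = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - dt, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc    # eigenvalues, l1 >= l2
    return (cx, cy), (math.sqrt(l1), math.sqrt(l2))

# Points spread widely along x, narrowly along y.
pts = [(x, 0.1 * (-1) ** x) for x in range(-5, 6)]
center, (a, b) = fit_ellipse(pts)
print(center, a > b)  # long axis along x, so a > b
```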


Reliable representations of strange attractors

MICHELUCCI, Dominique. Reliable representations of strange attractors. In : Scientific computing, validated numerics, interval methods. Springer, Boston, MA, 2001. p. 379-390.

The method described here relies on interval arithmetic and graph theory to compute guaranteed coverings of strange attractors like Hénon attractor. It copes with infinite intervals, using either a geometric method or a new directed projective interval arithmetic.

LINK   LINK: the conference version


2000


Hierarchical Pattern Analysis and Recognition using Ellipsoidal Skeleton

F. Banégas, F. Canovas, D. Michelucci, M. Roelens, M. Jaeger. Hierarchical Pattern Analysis and Recognition using Ellipsoidal Skeleton. Fourth International Conference in Computer Graphics and Artificial Intelligence, 3IA2000, Limoges, France.

Personal PDF   LINK

The amount of data needed to describe both the volume and the surface of 3D objects is often huge and produces bottlenecks at every step of analysis. Extracting relevant information in this case thus demands heavy and complex processing techniques. A preprocessing phase is required: data must be synthesized to accelerate the analysis procedure by providing context-dependent measurements. The parameter set should not be rigid, as it can be radically different from one application to another. The method we propose in this paper consists in transforming any digitized 3D solid – taking into account its inner points – into an Ellipsoidal Skeleton (or E-skeleton). Based on the paradigm of binary shape decomposition into a union of simple sub-shapes, it also gathers relevant information about the geometry and any other set of values that seems interesting, depending on the study context. Each sub-shape with its parameter set is viewed more generically as a feature, which is assumed to be non-decomposable at a given semantic level. This semantic zoom capability for object description permits a hierarchical approach, i.e. control of the scale of vision. A low semantic zoom allows a crude approximation for fast pre-classification, while a high semantic zoom highlights finer details for precise comparison. From this preprocessing stage, tasks such as object recognition and object analysis are made possible and intuitive via feature comparison. Any bottleneck is removed, ensuring fast data processing. Finally, the generic structure of the E-skeleton allows not only future improvements but also practical ways to add features to the E-skeleton repertoire.


A zero-free exact arithmetic

D. Michelucci, J-M. Moreau. ZEA: A zero-free exact arithmetic. 12th Canadian Conference on Computational Geometry, August 16-19, 2000, Fredericton, New Brunswick, Canada.

Personal copy


Reliable Representations of Strange Sets (Fractals)

D. Michelucci. Reliable Representations of Strange Sets. Oral communication at SCAN2000 (GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic, and Validated Numerics, jointly with Interval 2000: International Conference on Interval Methods in Science and Engineering), September 18–22, 2000, Karlsruhe, Germany.

Personal copy

LINK


1999


An introduction to the robustness issue

D. Michelucci. An introduction to the robustness issue. Swiss Conference of CAD/CAM, Neuchâtel, Switzerland, February 22-24, 1999, pg 214–221.

Personal PDF.


Automatic Extraction of Significant Features from 3D Point Clouds by Ellipsoidal Skeleton

Automatic extraction of significant features from 3D point clouds by ellipsoidal skeletons. Applications in vision and geometric characterization. In Mudur, S. P., Shikhare, D., Encarnacao, J. L., Rossignac, J. (Eds.), Proceedings of the International Conference on Visual Computing 1999, 23-26/02/1999, Goa, India: 58-67.

LINK

Abstract. We present a robust method for automatically constructing an ellipsoidal skeleton (e-skeleton) from a set of 3D points coming from segmented NMR or TDM images. To ensure steadiness and accuracy, all points of the objects are taken into account, including the inner ones. This skeleton will be essentially useful for object characterization, for comparisons between various measurements and as a basis for deformable models. It also provides a good initial guess for surface reconstruction algorithms. At the output of the entire process, we obtain an analytical description of the chosen entity, semantically zoomable (local features only or reconstructed surfaces), with any level of detail (LOD) by discretization step control, in voxel or polygon format. This capability allows us to handle objects at interactive frame rates once the e-skeleton is computed. Each e-skeleton is stored as a multiscale CSG implicit tree. Applications cover a wide range in computer graphics, from CAD to medical imaging.


Hierarchical automated clustering of cloud point set

Banégas, F., Michelucci, D., Roelens, M., Jaeger, M., Canovas, F., 1999. Hierarchical automated clustering of cloud point set by ellipsoidal skeleton. Application to organ geometric modeling from CT-scan images. In Hanson, K. M. (Ed.), Proceedings of Medical Imaging 1999 - Image Processing, 20-26/02/1999, San Diego, USA: 1227-1237.

LINK   Personal PDF

We present a robust method for automatically constructing an ellipsoidal skeleton (e-skeleton) from a set of 3D points taken from NMR or TDM images. To ensure steadiness and accuracy, all points of the objects are taken into account, including the inner ones, which differs from existing techniques. This skeleton will be essentially useful for object characterization, for comparisons between various measurements and as a basis for deformable models. It also provides a good initial guess for surface reconstruction algorithms. At the output of the entire process, we obtain an analytical description of the chosen entity, semantically zoomable (local features only or reconstructed surfaces), with any level of detail (LOD) by discretization step control, in voxel or polygon format. This capability allows us to handle objects at interactive frame rates once the e-skeleton is computed. Each e-skeleton is stored as a multiscale CSG implicit tree.


Ellipsoidal Skeleton for Multi-Scaled Solid Reconstruction

F. Banégas, D. Michelucci, M. Roelens, M. Jaeger. Ellipsoidal Skeleton for Multi-Scaled Solid Reconstruction. In Swiss Conference of CAD/CAM, Neuchâtel, Switzerland, February 22-24, 1999, pg 33–40.

Personal PDF.


Automatic Adaptative Surface Reconstruction from Ellipsoidal Skeleton

Banégas, F., Michelucci, D., Roelens, M., Jaeger, M., 1999. An automatic adaptative surface reconstruction from ellipsoidal skeleton. In Hugues, J., Schlick, C. (Eds.), Proceedings of the Fourth International Workshop on Implicit Surfaces, 13-15/09/1999, Talence, France: 113-122.

LINK


Convex Hull of Grid Points below a Line or a Convex Curve

H. Balza-Gomez, J-M. Moreau, D. Michelucci. Convex Hull of Grid Points below a Line or a Convex Curve. In DGCI 99, Discrete Geometry for Computer Imagery, Noisy-le-Grand, France, March 17-19, 1999, pg 361–374.

Consider a finite non-vertical, non-degenerate straight-line segment s = [s0;s1] in the Euclidean plane. We give a method for constructing the boundary of the upper convex hull of all the points with integral coordinates below (or on) s, with abscissa in [x(s0);x(s1)]. The algorithm takes O(log n) time, if n is the length of the segment. We next show how to perform a similar construction in the case where s is a finite, non-degenerate, convex arc on a quadric curve. The associated method runs in O(k log n), where n is the arc’s length and k the number of vertices on the boundary of the resulting hull. This method may also be used for a line segment; in this case, k = O(log n), and the second method takes O(k²) time, compared with O(k) for the first.

Personal PDF LINK
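To illustrate the object being computed, here is a brute-force O(n) sketch that enumerates the highest lattice point under each integer abscissa and keeps the upper hull with a standard monotone chain. This is not the paper's O(log n) algorithm, which avoids enumerating the points; endpoint abscissas are assumed integral, ordinates rational.

```python
import math
from fractions import Fraction as Q

def upper_hull_below_segment(s0, s1):
    """Upper convex hull of the lattice points on or below segment [s0, s1],
    for abscissas in [x(s0), x(s1)].  Brute-force O(n) illustration only;
    the paper reaches the same hull in O(log n) without enumeration."""
    (x0, y0), (x1, y1) = s0, s1       # assumes integer x0 < x1, rational y
    pts = []
    for x in range(x0, x1 + 1):
        # highest lattice point on or below the segment at abscissa x
        y_line = Q(y0) + (Q(y1) - Q(y0)) * Q(x - x0, x1 - x0)
        pts.append((x, math.floor(y_line)))
    hull = []                         # upper part of Andrew's monotone chain
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()                # keep only right turns (upper hull)
        hull.append(p)
    return hull

print(upper_hull_below_segment((0, Q(1, 2)), (5, Q(7, 2))))
# → [(0, 0), (1, 1), (5, 3)]
```

Exact rational ordinates are used so that the floor is computed without rounding error, in the spirit of the exact arithmetics discussed elsewhere on this page.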


The Periodicity of Integral Convex Hulls for Conics in R²

H. Balza-Gomez, D. Michelucci, J-M. Moreau. The Periodicity of Integral Convex Hulls for Conics in R². 11th Canadian Conference on Computational Geometry, pg 26-30, August 15-18, 1999, University of British Columbia, Vancouver, BC, Canada.

Personal PDF   LINK


1997


Lazy Arithmetic

D. Michelucci, J-M. Moreau. Lazy Arithmetic. IEEE Transactions on Computers, volume 46, number 9, pages 961–975, September 1997.

Personal copy

Finite-precision arithmetic leads to many problems in geometric methods from CAD or Computational Geometry. Until now, using exact rational arithmetic was a simple, yet much too slow solution to be of any practical use in real-scale applications. A recent optimization – the lazy rational arithmetic ([4]) – seems promising: it defers exact computations until they become either unnecessary (in most cases) or unavoidable; in such a context, only indispensable computations are performed exactly, that is: those without which any given decision cannot be reached safely using only floating-point arithmetic. This paper takes stock of the lazy arithmetic paradigm: principles, functionalities and limits, speed, possible variants and extensions, difficulties, problems solved or left unresolved.
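The lazy principle – decide with a cheap floating-point enclosure first, evaluate exactly only when the enclosure cannot decide – can be sketched as below. The `Lazy` class is a hypothetical illustration, not the paper's library; bounds here are padded crudely rather than certified by directed rounding.

```python
from fractions import Fraction

class Lazy:
    """A lazy number: a float interval plus a deferred exact expression.
    Exact (Fraction) evaluation is triggered only when the interval cannot
    decide a sign test.  Illustrative sketch, not the LEA library."""
    def __init__(self, lo, hi, exact_fn):
        self.lo, self.hi = lo, hi        # conservative float enclosure
        self._exact_fn = exact_fn        # deferred exact computation
        self._exact = None

    @classmethod
    def from_fraction(cls, f):
        x = float(f)
        pad = abs(x) * 1e-15 + 1e-300    # crude padding, not directed rounding
        return cls(x - pad, x + pad, lambda: f)

    def exact(self):
        if self._exact is None:
            self._exact = self._exact_fn()   # unavoidable: evaluate exactly
        return self._exact

    def __add__(self, other):
        lo, hi = self.lo + other.lo, self.hi + other.hi
        pad = (abs(lo) + abs(hi)) * 1e-15    # widen against rounding
        return Lazy(lo - pad, hi + pad, lambda: self.exact() + other.exact())

    def sign(self):
        if self.lo > 0: return 1         # interval decides: no exact work
        if self.hi < 0: return -1
        e = self.exact()                 # ambiguous: fall back to rationals
        return (e > 0) - (e < 0)

a = Lazy.from_fraction(Fraction(1, 3))
b = Lazy.from_fraction(Fraction(-1, 3))
print(a.sign())        # 1, decided by the interval alone
print((a + b).sign())  # 0, forced to evaluate exactly
```

The sign of `a` costs two float comparisons, while `a + b` straddles zero and silently falls back to exact rationals – exactly the "only indispensable computations are performed exactly" behaviour described above.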


A quadratic non standard arithmetic

D. Michelucci. A quadratic non standard arithmetic. Ninth Canadian Conference on Computational Geometry, pg 123–128, 1997.

Personal PDF copy.

This extended abstract presents an exact real quadratic arithmetic which also handles infinitely small numbers. The quadratic arithmetic provides the same operations as an exact rational one, plus the square root of non-negative numbers. It computes in the real quadratic closure of Q. For example, such an arithmetic can be used to compute the 2D arrangement of a set of circles and lines, or the 3D arrangement of a set of spheres and planes. The quadratic arithmetic is classic and presented in section 2. The new part is the way infinitely small numbers are managed in the quadratic framework. In Computational Geometry, infinitely small numbers are typically used to symbolically perturb input parameters [2, 11, 10, 3, 7]: this infinitesimal perturbation removes accidental dependencies between data, and thus eliminates geometric degeneracy (alignment of more than two points, cocircular...
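A restricted flavour of exact quadratic arithmetic – numbers a + b√c with rational a, b and one fixed non-square c – is small enough to sketch. This hypothetical `Quad` class shows the exact sign test at the heart of such arithmetics; the paper's arithmetic is far more general (nested radicals, infinitesimals), and division is omitted here.

```python
from fractions import Fraction as Q

class Quad:
    """Exact numbers a + b*sqrt(c), with a, b rational and c a fixed
    positive non-square integer.  Illustrative sketch only."""
    c = 2

    def __init__(self, a, b=0):
        self.a, self.b = Q(a), Q(b)

    def __add__(self, o):
        return Quad(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        # (a + b√c)(a' + b'√c) = (aa' + bb'c) + (ab' + ba')√c
        return Quad(self.a*o.a + self.b*o.b*self.c,
                    self.a*o.b + self.b*o.a)

    def sign(self):
        a, b = self.a, self.b
        if b == 0: return (a > 0) - (a < 0)
        if a == 0: return (b > 0) - (b < 0)
        if (a > 0) == (b > 0): return 1 if a > 0 else -1
        # a, b of opposite signs: the sign follows from a^2 versus b^2 * c
        d = a*a - b*b*self.c
        if d == 0: return 0
        return (1 if a > 0 else -1) if d > 0 else (1 if b > 0 else -1)

print((Quad(1, 1) * Quad(1, -1)).sign())   # (1+√2)(1−√2) = −1, so -1
print(Quad(3, -2).sign())                  # 3 − 2√2 ≈ 0.17 > 0, so 1
```

No floating-point square root is ever taken: the sign of 3 − 2√2 is settled by comparing 9 with 8 in exact rationals, which is why such arithmetics never misclassify near-degenerate predicates.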


Qualitative Study of Geometric Constraints

H. Lamure, D. Michelucci. Qualitative Study of Geometric Constraints. Workshop on Geometric Constraint Solving and Applications, Technical University of Ilmenau, Germany, September 26, 1997, Beat Brüderlin and Dieter Roller, editors, pg 134–145.

Also published as a chapter in: Geometric Constraint Solving and Applications, B. Brüderlin, D. Roller, eds., Springer-Verlag, 1998.

Personal PDF. Personal Postscript File.


Bridging the Gap Between CSG and Brep via a Triple Ray Representation

Mohand Ourabah Benouamer, Dominique Michelucci: Bridging the Gap Between CSG and Brep via a Triple Ray Representation. Symposium on Solid Modeling and Applications 1997: 68-79

Computing intersections between algebraic surfaces is an essential issue for Brep-based modellers, and a very difficult one. Most of the time, existing methods are not reliable, and reliable ones are hairy. We think there is another, simple-minded way which avoids this problem without loss of practicality. The key idea is to compute a triple ray representation by z-buffer, ray tracing or whichever method, and then to use the popular marching cubes algorithm with some local improvements (for instance to recognize sharp edges, or vertices).

Personal PDF copy. Photos are missing.

Personal Postscript copy. Photos are missing.


1996


Solving Geometric Constraints By Homotopy

Hervé Lamure, Dominique Michelucci. Solving Geometric Constraints By Homotopy. IEEE Trans. on Visualization and Computer Graphics, Vol 2, Issue 1, March 1996, pp 28–34.

Geometric modeling by constraints yields systems of equations. They are classically solved by Newton-Raphson iteration, from a starting guess interactively provided by the designer. However, this method may fail to converge, or may converge to an unwanted solution after a ‘chaotic’ behaviour. This paper claims that, in such cases, the homotopic method is much more satisfactory.

LINK
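The contrast can be demonstrated on a single equation. The sketch below is not the paper's solver, just the generic idea: plain Newton-Raphson on atan(x) = 0 escapes chaotically from x₀ = 3, whereas tracking the convex homotopy H(x, t) = f(x) − (1−t)·f(x₀) from the trivial root at t = 0 to the target problem at t = 1 succeeds.

```python
import math

def newton(f, df, x, tol=1e-10, maxit=40):
    """Plain Newton-Raphson; returns None when the iterates blow up."""
    for _ in range(maxit):
        x = x - f(x) / df(x)
        if not math.isfinite(x) or abs(x) > 1e12:
            return None                       # chaotic escape
        if abs(f(x)) < tol:
            return x
    return None

def homotopy(f, df, x0, steps=32):
    """Track H(x, t) = f(x) - (1-t)*f(x0) from t=0 (root x0) to t=1,
    warm-starting each Newton correction from the previous point."""
    x = x0
    for k in range(1, steps + 1):
        target = (1 - k / steps) * f(x0)
        x = newton(lambda y: f(y) - target, df, x)
        if x is None:
            return None                       # path tracking failed
    return x

f, df = math.atan, lambda x: 1.0 / (1.0 + x * x)
print(newton(f, df, 3.0))                 # None: Newton overshoots and escapes
print(abs(homotopy(f, df, 3.0)) < 1e-6)   # True: the path reaches the root 0
```

Each homotopy step changes the problem only slightly, so the previous solution is always a good Newton starting guess – the property the designer's interactive guess fails to provide in the chaotic cases.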


Arithmetic issues in geometric computations

D. Michelucci. Arithmetic issues in geometric computations. 2nd Conference on Real Numbers and Computers, Marseille, France, April 1996, pg 43–69.

Personal PDF copy.

This paper first recalls, through examples, the damage that the numerical inaccuracy of floating-point arithmetic can cause during geometric computations, and explains why this damage differs from that met in numerical computations. It then surveys the various approaches proposed to overcome inaccuracy difficulties: conservative approaches use classical geometric methods but with ‘exotic’ arithmetics instead of the standard floating-point one; radical ones go farther and reject classical techniques, considering them not robust enough against inaccuracy.


1995


An Epsilon-Arithmetic for Removing Degeneracies

D. Michelucci. An Epsilon-Arithmetic for Removing Degeneracies. IEEE Symposium on Computer Arithmetic, pages 230–237, Bath, July 1995.

Personal PDF.

Symbolic perturbation by infinitely small values removes degeneracies in geometric algorithms and enables programmers to handle only generic cases: there are a few such cases, whereas there are an overwhelming number of degenerate cases. Current perturbation schemes have limitations, presented below. To overcome them, this paper proposes to use an ϵ arithmetic, i.e. to represent in an explicit way infinitely small numbers, and to define arithmetic operations (+, -, ×, /, <, =) on them.
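One way to represent such numbers explicitly is as truncated power series in ϵ with rational coefficients, compared lexicographically since ϵ is smaller than every positive real. The class below is a hypothetical sketch of that idea, not the paper's arithmetic; division, the delicate operation, is omitted.

```python
from fractions import Fraction as Q
from itertools import zip_longest

class Eps:
    """Numbers c0 + c1*eps + c2*eps^2 + ..., where eps is an explicit,
    infinitely small positive symbol.  Illustrative sketch only."""
    def __init__(self, *coeffs):
        self.c = [Q(x) for x in coeffs]

    def __add__(self, o):
        return Eps(*(a + b for a, b in
                     zip_longest(self.c, o.c, fillvalue=Q(0))))

    def __sub__(self, o):
        return self + Eps(*(-x for x in o.c))

    def __mul__(self, o):
        out = [Q(0)] * (len(self.c) + len(o.c) - 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(o.c):
                out[i + j] += a * b
        return Eps(*out)

    def sign(self):
        # eps is below every positive real, so the lowest-degree
        # non-zero coefficient decides: lexicographic comparison
        for a in self.c:
            if a != 0:
                return 1 if a > 0 else -1
        return 0

    def __lt__(self, o):
        return (self - o).sign() < 0

eps = Eps(0, 1)
print(Eps(0, 0, 1) < eps < Eps(1))    # True: 0 < eps² < eps < 1
```

Perturbing the i-th input coordinate by a distinct power of ϵ then separates accidentally coincident or collinear data, while all sign tests remain exact.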


1994


Résolution de contraintes géométriques par homotopie

PDF, in French

Geometric modeling by constraints leads to systems of equations that are generally solved by the Newton-Raphson algorithm (or one of its variants), starting from a configuration provided interactively by the user. But, too often, this method does not converge, or converges to an unwanted solution after a chaotic path. Applying a homotopy method in such cases gives much more satisfactory results.


Error-Free Boundary Evaluation Based on a Lazy Rational Arithmetic

Error-Free Boundary Evaluation Based on a Lazy Rational Arithmetic: A Detailed Implementation. M.O. Benouamer, D. Michelucci, and B. Péroche. Computer-Aided Design 26(6): 403-416 (1994)

Personal copy.

A new boundary evaluation method is presented. It is based on error-free Boolean operations on polyhedral solids. We describe, in detail, an intersection algorithm that handles, in a straightforward way, all the possible geometric cases. We also describe a general data structure that allows a unified storage of solid boundaries. The intersection algorithm always runs to completion, producing consistent solids from consistent operands. Numerical errors are handled at an algorithm-independent level: an original exact arithmetic that performs only the necessary precise computations. Results from our implementation of this CSG solver are discussed.


Hashing lazy numbers

Mohand Ourabah Benouamer, P. Jaillon, Dominique Michelucci, Jean-Michel Moreau: Hashing lazy numbers. Computing 53(3-4): 205-217 (1994)

Personal copy. LINK   LINK   LINK   LINK

This paper describes an extension of the lazy rational arithmetic (LEA). A lazy arithmetic is an optimized version of the usual exact arithmetics used in Symbolic Calculus, in Computational Geometry and in many other fields. We present a method originating from modular arithmetic for storing lazy numbers in hash tables. This method uses results from the well-studied technique of hash coding to compute efficient keys for lazy numbers. In fact, such keys may be used to hash-code lazy numbers, or data containing lazy numbers, such as vertices or line segments in Computational Geometry.
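The core trick from modular arithmetic can be shown in a few lines: map the rational num/den to its residue modulo a fixed prime. Equal rationals always receive equal keys, whatever unreduced form they arrive in, so the key can index a lazy number without forcing its exact evaluation. This is an illustrative sketch with an arbitrarily chosen prime, not the paper's scheme.

```python
P = (1 << 61) - 1   # a large prime modulus (an arbitrary choice here)

def key(num, den):
    """Hash key of the rational num/den: the residue num * den^-1 mod P.
    Assumes P does not divide den; a full scheme must handle that case
    (e.g. by switching to a spare prime).  Illustrative sketch."""
    return (num % P) * pow(den, -1, P) % P   # pow(den, -1, P): modular inverse

print(key(2, 4) == key(1, 2))    # True: same rational, same key
print(key(1, 3) == key(1, 2))    # False: distinct rationals rarely collide
```

Distinct rationals can collide only when their difference vanishes modulo P, which is rare for a large prime; collisions are then resolved by the hash table as usual, possibly triggering an exact comparison.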


1993


Reduction Of Constraint Systems

Reduction Of Constraint Systems. Samy Ait-Aoudia, Roland Jégou, Dominique Michelucci. COMPUGRAPHICS 1993. LINK   LINK   LINK   LINK

Geometric modeling by constraints leads to large systems of algebraic equations. This paper studies the bipartite graphs that underlie such systems of equations. It shows how these graphs make it possible to decompose the systems, in polynomial time, into well-constrained, over-constrained and under-constrained subsystems. The paper also gives an efficient method to decompose well-constrained systems into irreducible ones. These decompositions greatly speed up the resolution of reducible systems, and they also help debug systems of constraints.
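The structural test underlying such decompositions is a maximum matching in the bipartite graph linking each equation to the unknowns it mentions. The sketch below (Kuhn's augmenting-path matching, on a hypothetical toy system) only checks the well-constrained condition; the paper's decomposition goes further and actually splits the system into over-, under- and well-constrained subsystems.

```python
def max_matching(eqs_vars):
    """Maximum bipartite matching between equations and the unknowns they
    mention (Kuhn's augmenting paths).  A structurally well-constrained
    system matches every equation to a distinct unknown, and vice versa."""
    match = {}                                  # unknown -> equation
    def augment(eq, seen):
        for v in eqs_vars[eq]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current equation can be re-matched
                if v not in match or augment(match[v], seen):
                    match[v] = eq
                    return True
        return False
    size = sum(augment(eq, set()) for eq in eqs_vars)
    return size, match

# hypothetical 2D example: anchor point p1, pin x2, one distance constraint
system = {
    "dist(p1,p2)=5": ["x1", "y1", "x2", "y2"],
    "x1=0":          ["x1"],
    "y1=0":          ["y1"],
    "x2=3":          ["x2"],
}
size, _ = max_matching(system)
print(size == len(system))   # True: every equation matched to its own unknown
```

With four equations, four unknowns and a perfect matching, the toy system is structurally well constrained; an unmatched equation would signal over-constraint, an unmatched unknown under-constraint.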


A Lazy Solution To Imprecision In Computational Geometry

M.O. Benouamer, P. Jaillon, D. Michelucci, and J-M. Moreau. A Lazy Solution To Imprecision In Computational Geometry. In Proceedings of the 5th Canadian Conference on Computational Geometry. Waterloo, Canada, August 5-9, pages 73–78, 1993.

Personal PDF copy. Personal PS copy.


A Lazy Arithmetic Library

M.O. Benouamer, P. Jaillon , D. Michelucci, and J-M. Moreau. A Lazy Arithmetic Library. In Proceedings of the IEEE 11th Symposium on Computer Arithmetic. pages 242–249, Windsor, Ontario, June 30-July 2, 1993.

Personal PDF copy.


Error-free boundary evaluation using lazy rational arithmetic: a detailed implementation

M. Benouamer, D. Michelucci, B. Péroche. In Proceedings of SMA ’93, the Second ACM Symposium on Solid Modeling and Applications, pages 115-126, ACM, New York, NY, USA, 1993.

LINK    LINK   LINK

A new boundary evaluation method is presented. It is based on error-free Boolean operations on polyhedral solids. We describe, in detail, an intersection algorithm that handles, in a straightforward way, all the possible geometric cases. We also describe a general data structure that allows a unified storage of solid boundaries. The intersection algorithm always runs to completion, producing consistent solids from consistent operands. Numerical errors are handled at an algorithm-independent level: an original exact arithmetic that performs only the necessary precise computations. Results from our implementation of this CSG solver are discussed.


1984


D. MICHELUCCI, M. GANGNET: Saisie de Plans à Partir de Tracés à Main Levée. Proceedings of MICAD’84, Paris, 1984, 96–110.