Extract central skeleton algorithm

4 messages
Extract central skeleton algorithm

mukoki
Hi,

I read the excellent article on central-skeleton at
https://github.com/orbisgis/h2gis/wiki/3.1-Extract-central-skeleton

I've almost finished writing an implementation as a plugin for OpenJUMP.
First, I want to thank you, and also to make sure that such an implementation
does not infringe your license (I don't use the H2GIS source code anyway;
I relied on your excellent article).

Here are a few remarks about the article:
Step 1: you wrote "Epsilon is the merge distance of two close points", but it seems you use the SimplifyPreservingTopology function, which applies Douglas-Peucker. That algorithm is not based on a minimum distance between consecutive points, but on a maximum Hausdorff distance between the original and the simplified lines.
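To illustrate the distinction, here is a minimal Douglas-Peucker sketch (a self-contained illustration, not the H2GIS or JTS code): a vertex is kept only when its perpendicular deviation from the simplified chord exceeds the tolerance, regardless of how closely spaced consecutive points are.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal Douglas-Peucker sketch (illustrative, not the H2GIS code):
// a point survives only if its perpendicular distance to the chord
// exceeds the tolerance, so the criterion bounds the deviation between
// the original and simplified lines, not the spacing of points.
public class DouglasPeucker {
    static double perpDist(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double len = Math.hypot(dx, dy);
        if (len == 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
        return Math.abs(dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / len;
    }

    static List<double[]> simplify(List<double[]> pts, double tol) {
        if (pts.size() < 3) return new ArrayList<>(pts);
        int idx = -1; double max = 0;
        for (int i = 1; i < pts.size() - 1; i++) {
            double d = perpDist(pts.get(i), pts.get(0), pts.get(pts.size() - 1));
            if (d > max) { max = d; idx = i; }
        }
        List<double[]> out = new ArrayList<>();
        if (max > tol) {  // split at the farthest point and recurse
            List<double[]> left = simplify(pts.subList(0, idx + 1), tol);
            List<double[]> right = simplify(pts.subList(idx, pts.size()), tol);
            out.addAll(left.subList(0, left.size() - 1));
            out.addAll(right);
        } else {          // whole run within tolerance: keep endpoints only
            out.add(pts.get(0));
            out.add(pts.get(pts.size() - 1));
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> line = new ArrayList<>();
        line.add(new double[]{0, 0});
        line.add(new double[]{1, 0.1});   // small bump, dropped at tol = 0.5
        line.add(new double[]{2, 0});
        line.add(new double[]{3, 2});     // large deviation, kept
        line.add(new double[]{4, 0});
        System.out.println(simplify(line, 0.5).size());
    }
}
```

Note that the bump at (1, 0.1) is dropped even though its neighbours are far apart, while the vertex at (3, 2) is kept: only the deviation matters, not point spacing.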
Step 3: you wrote "Union and simplify skeleton". I can see a merge, but no simplification in the code. I also wonder why you merge and then explode: I think the edges obtained from the Voronoi diagram are simple segments ending at intersections, so merge/explode should do almost nothing (I have not tested it, though; maybe I missed something).
Step 4: to perform the graph filtering, I used jgrapht-sna. Is there any reason why you implemented your own java-network-analyzer? Does java-network-analyzer have dependencies other than jgrapht?

Thanks, and keep up the good work!
Re: Extract central skeleton algorithm

mukoki
After some more tests:

Step 3: OK, I now understand why the merge is useful (it happens after the elimination of the Voronoi edges that intersect the polygon boundary).
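The effect of that merge can be sketched with a toy graph (hypothetical code, not the H2GIS implementation): after the boundary-crossing edges are removed, the surviving Voronoi segments chain through degree-2 nodes, and merging collapses each chain into one linestring.

```java
import java.util.*;

// Hypothetical sketch of why merging matters after edge elimination:
// the surviving segments chain through degree-2 nodes, and merging
// yields one linestring per branch instead of many raw segments.
// Nodes are integer ids; geometry is omitted for brevity.
public class SegmentMerger {
    public static int mergedLineCount(int[][] segments) {
        Map<Integer, List<int[]>> adj = new HashMap<>();
        for (int[] s : segments) {
            adj.computeIfAbsent(s[0], k -> new ArrayList<>()).add(s);
            adj.computeIfAbsent(s[1], k -> new ArrayList<>()).add(s);
        }
        // identity-based set is fine: we only ever store the same array refs
        Set<int[]> visited = new HashSet<>();
        int lines = 0;
        for (int[] seg : segments) {
            if (visited.contains(seg)) continue;
            lines++;
            // grow the chain from both endpoints through degree-2 nodes
            Deque<int[]> stack = new ArrayDeque<>();
            stack.push(seg);
            while (!stack.isEmpty()) {
                int[] cur = stack.pop();
                if (!visited.add(cur)) continue;
                for (int node : cur) {
                    List<int[]> incident = adj.get(node);
                    if (incident.size() == 2) { // pass-through node: same line
                        for (int[] nb : incident)
                            if (!visited.contains(nb)) stack.push(nb);
                    }
                }
            }
        }
        return lines;
    }

    public static void main(String[] args) {
        // chain 1-2-3-4 plus a branch at node 3 towards node 5:
        // four raw segments merge into three lines meeting at node 3
        int[][] segs = {{1, 2}, {2, 3}, {3, 4}, {3, 5}};
        System.out.println(mergedLineCount(segs));
    }
}
```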

After some tests on big polygons and big datasets, I realized that BetweennessCentrality is very CPU-intensive, and that it will probably not be able to process big datasets if I need to densify them down to 1 or 2 m.
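The cost can be seen in a sketch of Brandes' algorithm, the standard O(V·E) method for betweenness centrality on unweighted graphs (an illustrative implementation, not the java-network-analyzer code): every vertex added by densification triggers one more full BFS over all edges.

```java
import java.util.*;

// Sketch of Brandes' betweenness centrality, O(V*E) for unweighted
// graphs: one BFS per source vertex, then dependency back-propagation.
// Illustrative only; not the java-network-analyzer implementation.
public class Betweenness {
    public static double[] compute(List<List<Integer>> adj) {
        int n = adj.size();
        double[] bc = new double[n];
        for (int s = 0; s < n; s++) {
            // BFS from s, recording shortest-path counts and predecessors
            Deque<Integer> stack = new ArrayDeque<>();
            List<List<Integer>> pred = new ArrayList<>();
            for (int i = 0; i < n; i++) pred.add(new ArrayList<>());
            double[] sigma = new double[n];
            int[] dist = new int[n];
            Arrays.fill(dist, -1);
            sigma[s] = 1; dist[s] = 0;
            Deque<Integer> queue = new ArrayDeque<>();
            queue.add(s);
            while (!queue.isEmpty()) {
                int v = queue.poll();
                stack.push(v);
                for (int w : adj.get(v)) {
                    if (dist[w] < 0) { dist[w] = dist[v] + 1; queue.add(w); }
                    if (dist[w] == dist[v] + 1) {
                        sigma[w] += sigma[v];
                        pred.get(w).add(v);
                    }
                }
            }
            // back-propagate dependencies in order of decreasing distance
            double[] delta = new double[n];
            while (!stack.isEmpty()) {
                int w = stack.pop();
                for (int v : pred.get(w))
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w]);
                if (w != s) bc[w] += delta[w];
            }
        }
        return bc;
    }

    public static void main(String[] args) {
        // path graph 0-1-2-3: the inner vertices carry all the traffic
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}};
        for (int[] e : edges) {
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        double[] bc = compute(adj);
        System.out.println(bc[1] + " " + bc[2]);
    }
}
```

Densifying a boundary from 10 m to 1 m spacing multiplies V (and roughly E) by about 10, so the V·E cost grows by roughly two orders of magnitude, which matches the slowdown observed above.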

I would like to compare the method with a simpler one, such as the iterative elimination of ending edges, which should be faster.
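A minimal sketch of that simpler alternative, assuming "ending edges" means edges touching a degree-1 node (hypothetical code, with integer node ids standing in for geometry):

```java
import java.util.*;

// Hypothetical sketch of iterative ending-edge elimination:
// repeatedly drop every edge touching a degree-1 node until none remain.
public class LeafPruning {
    public static List<int[]> prune(List<int[]> edges) {
        List<int[]> current = new ArrayList<>(edges);
        boolean changed = true;
        while (changed) {
            Map<Integer, Integer> degree = new HashMap<>();
            for (int[] e : current) {
                degree.merge(e[0], 1, Integer::sum);
                degree.merge(e[1], 1, Integer::sum);
            }
            List<int[]> kept = new ArrayList<>();
            for (int[] e : current)   // drop edges touching a degree-1 node
                if (degree.get(e[0]) > 1 && degree.get(e[1]) > 1) kept.add(e);
            changed = kept.size() < current.size();
            current = kept;
        }
        return current;
    }

    public static void main(String[] args) {
        // triangle 0-1-2 with a dangling tail 2-3-4: the tail is pruned
        List<int[]> edges = Arrays.asList(
            new int[]{0, 1}, new int[]{1, 2}, new int[]{0, 2},
            new int[]{2, 3}, new int[]{3, 4});
        System.out.println(prune(edges).size());
    }
}
```

One caveat worth testing: a skeleton of a simple polygon is a tree, so pruning to a fixed point would eventually erase everything; in practice one would presumably stop after a bounded number of passes, or only remove dangling branches shorter than some length threshold.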

Re: Extract central skeleton algorithm

nicolas-f
Administrator
In reply to this post by mukoki
Hi,

mukoki wrote
I've almost finished writing an implementation as a plugin for OpenJUMP.
First, I want to thank you, and also to make sure that such an implementation
does not infringe your license (I don't use the H2GIS source code anyway;
I relied on your excellent article).
No problem; we are pleased to have been of help in your work.


mukoki wrote
Step 1: you wrote "Epsilon is the merge distance of two close points", but it seems you use the SimplifyPreservingTopology function, which applies Douglas-Peucker. That algorithm is not based on a minimum distance between consecutive points, but on a maximum Hausdorff distance between the original and the simplified lines.
Yes, you are right; we will update the tutorial.



mukoki wrote
Step 4: to perform the graph filtering, I used jgrapht-sna. Is there any reason why you implemented your own java-network-analyzer? Does java-network-analyzer have dependencies other than jgrapht?
I will let my colleague answer this question since I was not involved in the development of java-network-analyzer.


mukoki wrote
After some tests on big polygons and big datasets, I realized that BetweennessCentrality is very CPU-intensive, and that it will probably not be able to process big datasets if I need to densify them down to 1 or 2 m.

I would like to compare the method with a simpler one, such as the iterative elimination of ending edges, which should be faster.
Yes, centrality computation is a CPU-intensive operation.

Best regards

Re: Extract central skeleton algorithm

ebocher
Administrator
Hi mukoki ;-)

It's a pleasure to see you here.
Thanks for the comments, and feel free to use H2Network in OpenJUMP.
We will fix the documentation ASAP.

Please find more details about the H2Network functions and the JNA (java-network-analyzer) library here:
https://halshs.archives-ouvertes.fr/halshs-01133333

Cheers

Erwan