I was curious about the spec of this new GOB format. There's a comment downthread explaining that there isn't a spec yet, but discussing some of the details of the format: https://community.openstreetmap.org/t/new-osm-file-format-30...
Aside from OSM specifics, performance-friendly formats for spatial data that support spatial indexing can make a huge impact on the usability and productivity of applications. For example, trying to view a large dataset in QGIS that has been saved as KMZ (zipped XML) can make QGIS hang for minutes, while the same dataset saved as something like FlatGeobuf [1] loads almost instantly (quick conversion sketch below).
[1] https://flatgeobuf.org/
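For anyone curious, this is roughly what the conversion looks like with GeoPandas. File names are made up, and it assumes a GDAL build with the LIBKML driver (for KMZ) and the FlatGeobuf driver (GDAL 3.1+):

    # Hypothetical file names; requires geopandas plus a GDAL build with the
    # LIBKML and FlatGeobuf drivers.
    import geopandas as gpd

    # KML/KMZ has to be parsed in full before anything can be drawn.
    # A multi-layer KMZ may need an explicit layer=... argument.
    gdf = gpd.read_file("large_dataset.kmz")

    # FlatGeobuf writes a packed Hilbert R-tree spatial index, so viewers can
    # fetch only the features in the current extent.
    gdf.to_file("large_dataset.fgb", driver="FlatGeobuf")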
My guess is that one of the fundamental differences is that KMZ isn't streamable and needs to be fully loaded into memory and then transformed into whatever structure QGIS uses internally, but I'm not totally sure about that and haven't used QGIS in a while. I've also had bad luck loading KMZ/KML of any reasonable complexity into any other GIS app.
How does GeoJSON of the same data compare?
Sometimes with QGIS the best thing you can do is load the data into Postgres/PostGIS; you get orders-of-magnitude performance improvements.
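A minimal sketch of that route with GeoPandas, assuming PostGIS is enabled and sqlalchemy/geoalchemy2 are installed (connection string, table and file names are placeholders):

    import geopandas as gpd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:pass@localhost:5432/gis")
    gdf = gpd.read_file("large_dataset.fgb")

    # GeoPandas writes the geometry column as "geometry" by default.
    gdf.to_postgis("large_dataset", engine, if_exists="replace", index=False)

    # The GiST index is what makes QGIS feel fast afterwards.
    with engine.begin() as conn:
        conn.exec_driver_sql(
            "CREATE INDEX IF NOT EXISTS large_dataset_geom_idx "
            "ON large_dataset USING GIST (geometry);"
        )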
Does this use the new OSM data model?
https://media.jochentopf.com/media/2022-08-15-study-evolutio...
https://github.com/osmlab/osm-data-model
https://blog.openstreetmap.org/2023/01/04/reminder-call-for-...
Resolving node references to coordinates in the current data model is such a nuisance: it's slow and requires lots of RAM.
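To make the pain concrete, here's roughly what it looks like with pyosmium (file name is a placeholder): ways only carry node IDs, so every node location has to be cached before any way geometry is usable.

    import osmium

    class RoadLength(osmium.SimpleHandler):
        def __init__(self):
            super().__init__()
            self.meters = 0.0

        def way(self, w):
            if 'highway' in w.tags:
                try:
                    # Needs the coordinates of every referenced node, hence the
                    # location index passed to apply_file below.
                    self.meters += osmium.geom.haversine_distance(w.nodes)
                except osmium.InvalidLocationError:
                    pass  # way references a node missing from the extract

    h = RoadLength()
    # idx="flex_mem" keeps all node locations in RAM; on a planet-sized file
    # that is tens of GB, which is exactly the nuisance described above.
    h.apply_file("extract.osm.pbf", locations=True, idx="flex_mem")
    print(round(h.meters / 1000), "km of highway")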
Tangentially related question for any of you GIS people who might be lurking in this thread:
Can anyone recommend a method for meshing LIDAR point clouds? The sparseness of the data on building walls and other near-vertical surfaces, combined with the lack of point normals, leads to degenerate solutions with all the common approaches (Poisson, ball pivoting, VCG in MeshLab), not to mention extremely slow performance. Tree canopies and overhanging parapets make a simple heightmap approach less than desirable (though ultimately acceptable if I can't find anything better). I'm trying to turn 90 billion LIDAR points into maybe 30-50 million triangles, hopefully without spending months developing a custom pipeline.
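Not a full answer, but in case it's useful: the route I'd try per tile before writing anything custom is explicit normal estimation plus Poisson in Open3D; the consistent tangent-plane orientation step is often what keeps Poisson from degenerating when the scanner provides no normals. All paths and parameters below are guesses, and 90 billion points would have to be tiled first (e.g. with PDAL).

    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("tile_001.ply")
    pcd = pcd.voxel_down_sample(voxel_size=0.5)  # meters; thin the raw cloud first

    # LIDAR has no normals, so estimate them and orient them consistently.
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=30)

    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=11)

    # Drop the low-density surface Poisson hallucinates far from any points.
    dens = np.asarray(densities)
    mesh.remove_vertices_by_mask(dens < np.quantile(dens, 0.05))

    # Decimate toward the overall 30-50M triangle budget, split across tiles.
    mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)
    o3d.io.write_triangle_mesh("tile_001_mesh.ply", mesh)

No promises on the near-vertical walls, but normal orientation plus the density trim usually gets rid of the worst blobs.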
https://3dbag.nl/ might be worth a try. This project reconstructed and maintains building models for 11 million buildings in the Netherlands.
It combines airborne LiDAR with building footprints; it's open source (https://github.com/3DBAG), with the reconstruction pipeline here: https://github.com/3DBAG/roofer.
I think Meshroom can use LIDAR data as an input now? I used it years ago for photogrammetry and camera tracking for some VFX work, and it's an incredibly solid suite of open-source tools for these types of tasks.
My opinion: Without support in libosmium and GDAL, this will remain a marginal phenomenon.
Is there some reason to believe they will not support it?
Because otherwise this is true of all new specs (edit: ideas, since this isn't even a finished spec), and it implies absolutely nothing.
Does it work with osmium?
Not yet; it was just introduced and there isn't even a full spec yet.