3MF file format and why it’s great (2019) (prusaprinters.org)
82 points by danboarder on Nov 22, 2021 | hide | past | favorite | 85 comments


I wouldn't call 3MF great. The format is text based. This means that saving a 1 GB model requires printing 1 GB of floating-point numbers, and loading it requires parsing them all back.

Text formats are good when human readable, but no sane person is going to read 1 GB of data. Not even 100 MB. Both sizes are now realistic for very high-poly models.

They should have designed a true binary format instead of zipped XML. As for the STL format, it can be either binary or text depending on the header. When they measured sizes, I'm pretty sure they used the text variant of STL; otherwise they'd have gotten similar results.

Also, the format is rather complex. One of the reasons STL is so popular is that it only takes a couple of pages of code to support, almost regardless of the programming language. For optimal 3MF compatibility, one needs the official C++-only lib3mf library to read/write 3MF files.


I know that it is counterintuitive, but storing doubles in binary can, for many use cases, take much more space than storing text. Each uncompressed double is 8 bytes; the text "3.2e-3" is only 6 bytes, a 1-digit integer is 1 byte, and a 5-digit integer is 3 bytes shorter than the double that stores it... Also, when humans enter data in some specific units they want to see the same number back, not a version rounded in binary (although very good algorithms exist to that end). Text compression of numeric data is extremely efficient. Cross-platform binary formats are complex, and binary formats go stale much sooner. Letting humans peek at the file in text mode, debug it, or use simple command-line tools with copied-and-pasted data is also useful for exploration. The best of both worlds is a text representation coupled with a one-to-one binary encoding; but as I pointed out, many times the binary format actually takes, unintuitively, more space...


Part of designing a serialization format is getting to pick how you encode numbers, and one should do so with a good match for the range and required precision. I'm not sure double precision is even required or necessary here.

> Text compression of number data is extremely efficient

Of course it isn't. It is arguably one of the worst there is. It can only ever be more efficient when representing a very low subset of values and the alternative binary representation is exceedingly poor. But those constraints would make the whole discussion kinda silly.

Serializing text is, after all, binary-encoding data. If you follow either with a compression algorithm, you'll likely not end up too far apart.

Just take the example you gave: "A 5 digit integer is 3 bytes less than the double to store it". Yes, true. However, a 5-digit integer can also be represented in 17 bits (12 bits would only cover 0..4095).

And you can even bit-pack them if that is all you really care about (there is a good reason you wouldn't want to: word alignment is much more important, since it lets you map data directly to memory allocations, but let's say for argument's sake that storage is king). Each 5-digit number as text needs another byte for the line break or delimiter, i.e. 6 bytes instead of 17 bits, almost 3 times the space. For a million values, 5.7 MiB instead of about 2.0 MiB.
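A quick sketch of that size arithmetic in Python (the exact figures depend on the delimiter and packing choices; 17 bits cover 0..131071, so they suffice for any 5-digit value):

```python
import struct

values = list(range(10000, 100000))  # every 5-digit integer, 90,000 of them

# Text: 5 ASCII digits plus a 1-byte newline delimiter per value.
text_size = len("\n".join(str(v) for v in values).encode()) + 1  # trailing \n

# Plain binary: fixed-width 4-byte little-endian ints, no delimiter needed.
binary_size = len(struct.pack(f"<{len(values)}i", *values))

# Bit-packed: 17 bits per value (2**17 = 131072 > 99999), rounded up to bytes.
packed_size = (17 * len(values) + 7) // 8

# text_size = 540,000 bytes; binary_size = 360,000; packed_size = 191,250,
# i.e. text costs about 2.8x the bit-packed representation.
```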


> It can only ever be more efficient when representing a very low subset of values and the alternative binary representation is exceedingly poor.

In almost all cases in practice one is only serializing a very low subset of values. A 1 TB file, no matter how efficient the encoding, could only contain serialization of less than 0.00005% of the possible IEEE 754 double values.


That is of course true. My point was to highlight the kinds of situations where straight serialization of text comes somewhat closer in required storage to a comparable serialization of number values. That any such serialization only encodes a subset of the values the format can represent is, I would say, beside the point; it is, after all, how number representations work.

To reiterate my original point: It is never the case that serializing text that represents numbers is the most efficient representation. The posts here seem to either suggest or assume that it is. And that was only the size consideration. It is even worse in terms of processing: you need O(n) work in the length of the text data just to decode the values, a step that is entirely avoided with a proper binary format.

Just an anecdotal example on the computational aspect. Maybe 10 years ago or so, I used an OBJ loading library that decoded 3D models represented as ASCII text. I wrote a simple de/serialization program, just 70 lines of code, to both dump and read back the data. The load time for a complicated scene went from 38 seconds to 0.7.
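A rough sketch of that dump/read idea (hypothetical data; `struct` stands in for whatever fixed-width dump format the 70-line program actually used):

```python
import struct

# Hypothetical vertex data as it might appear in an ASCII OBJ file.
vertices = [(0.5, 1.25, -3.0), (2.0, -0.75, 4.5)]
obj_text = "".join(f"v {x} {y} {z}\n" for x, y, z in vertices)

# Text path: every load re-parses each float, O(length) work per value.
parsed = [tuple(float(t) for t in line.split()[1:])
          for line in obj_text.splitlines() if line.startswith("v ")]

# Binary path: dump once; every later load is a single fixed-width read.
flat = [c for v in vertices for c in v]
blob = struct.pack(f"<{len(flat)}f", *flat)          # 4 bytes per coordinate
reloaded = struct.unpack(f"<{len(flat)}f", blob)     # no text decoding at all
```

The 50-100x speedups mentioned further down the thread come from skipping exactly that per-character decode step.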

This isn't to say there aren't good use cases for text formats. However, human-readable data is always a tradeoff at the expense of efficiency, both storage and computational. The storage cost matters much less once you add a simple compression step, and very often the tradeoff is acceptable.


In my original comment to which you replied I wrote "The best of both worlds is to have a text representation coupled with one-to-one binary encoding;" for the reasons I wrote in that comment. I am aware of all your points on ser/des of text and I fully agree with them.

Also, I was constraining the use case for text to data that is either the input of humans, or generated from those inputs.

For data that is gathered from devices, etc. I don't think a text representation would be of much use.

Also, between a set of parameters entered by a human and an audio or video stream, there are a lot of use cases in between (all the points of an N-sided polygon after a human entered 4 or 5 parameters, ...., statistical data generated from parameters or other inputs, ...). As a rule of thumb, I think the further the data is from direct human input, the less sense it makes to store the textual representation.


> It is never the case that serializing text that represents numbers is the most efficient representation.

This is false.


Can you elaborate? GP's argument is verbose with a lot of explanations and examples that I agree with. What text encoding and how would it be more efficient?


When the text file is a source of data rather than a reflection of the data you want to operate on, and particularly when a lot of that data has been entered by humans (or derived from human input) in some suitable units, it very often happens that many of the generated numbers are short, shorter than a serialized double.

I know it is difficult to believe, so don't; I was not trying to say that text is better everywhere, always; of course not. I myself give users text as a reference while creating a one-to-one representation in a well-known binary format for speed of access. When you really have huge amounts of data, each use of it usually requires filtering/selection/manipulation/a new memory layout/etc., which makes the original optimizations in the binary format useless and complicated to maintain, and harder to use with low friction by anyone even 10 years in the future. I consider the textual representation the source of truth, while the binary lets me do the filtering faster, although on many occasions it takes more space. Yes, one could choose better encodings for the binary data (the bit-packing referenced above, etc.), but the complexity of the code increases and it becomes even more obscure to the humans who have to use (and sometimes debug) that data.

I understand it is difficult to believe. Well, then don't. Probably my experience is very niche.


Seems we spoke past each other then. I did not focus on practical aspects or other such tradeoffs, just on what the most efficient way to represent numbers is. If your argument is that text is good enough for many practical applications, then I don't disagree.

But if the argument was that text is a more efficient way to store numerical data, then there is no need to bring beliefs into this discussion; it is just not the case. Serialization of numbers was solved by engineers many decades ago, and there are many encodings to choose from based on extra information about what the numbers should represent. One such representation is indeed useful as a lookup table for text. But to suggest that breaking a single numerical value up into individual lookup-table values is somehow more efficient than storing the original value with the right choice of encoding... I don't know. I don't care all that much. But surely you can see my argument?

The whole thing is a tautology. For the sake of silliness, if ASCII-encoding numbers somehow were the most efficient way of encoding numbers (it isn't, but let's say it was), it would then be exactly as efficient as "choosing the best way to serialize a number". But surely, whatever that data was, it would then be most efficiently represented by the ASCII representation. So we should store the ASCII representation of the bytes that store the lookup-table values, as encoded ASCII values. It doesn't take more than a handful of iterations before you run out of storage space available on the planet.


I agree with everything. For users that are experts in their domain (not computer science), if you don't give them text in the frontend, which they can parse with little (but very inefficient) code in their scripting language of choice, they will reinvent some even more difficult/complex/yet-another way to handle text that you will have to support in the backend for performance. In the backend I choose binary formats (although in my experience most uses of large amounts of data require specific filtering/reordering/a new memory layout for efficient algorithms anyway). And, counterintuitively, when you do that, and you have hundreds of files of hundreds of megabytes where some columns are integers, and you see the size of your binary data, you wonder... why is my binary data a few times larger?


That is a good point. Just binary-encoding numbers does not mean you have an efficient serialized representation, and sometimes it is so bad that even encoding numbers as serialized text beats it.

I think what threw me off was the context of this discussion, which was 3MF file format. For such cases, the usefulness of humans opening up the data in text editors is somewhat contrived. I believe that tradeoff for the 3MF format was a mistake, and will likely hinder broad adoption. I'm however mostly concerned with the extra processing required in order to decode the human readable files. We're talking a likely speedup of a factor of 50-100. For models that push this limit, imagine a file that takes 60 seconds to process. This isn't entirely unreasonable for a model with insane level of details. If they instead had gone for a binary file representation, the load time would likely be less than a second. Failing to recognize this tradeoff for human readable text... is unfortunate.


Hmm, I think I see your point. So, for example, if most numbers were something like 0.1, 0.25, 3.25 and the like, they would take 4 bytes in most cases, whereas storing doubles would always take 8 bytes.

We don't want to use float32 because of its limited precision, so storing as text is better.

Thanks for the explanation.


If you’re reading or writing a file format, whether it’s text, floating point or some other encoding, you don’t have to store the data in memory the same way when you work with it.

…which frees you to write your data to disk however you want.

And given that freedom to choose any serialization format, you definitely should not be manually figuring out every byte you write; you probably write an abstraction, so whatever complexity there is should be completely transparent, because you unit tested the hell out of your serialization routines.

…which to me makes the whole complexity point moot, and which also makes the whole “text is better for memory minimization” argument also moot.

I do think text formats have a place — primarily readability and interoperability — but to choose it for purely reducing your file sizes sounds insane to me.


There is a good reason for why one would want to store data the same way as when you work on it. It removes the need to process the data itself. A common approach is to follow it up with a fast compression algorithm when storage space is important.

The difference in post-processing when having to decode text is around two orders of magnitude, which is very significant. In the case of the 3MF format discussed in this article, I find the tradeoff for human-readable files a very strange one. The benefit of that, at the cost of waiting a minute to load a file that would otherwise take a second, is puzzling. But it's all a tradeoff after all. If humans work with the files, that is a strong argument for text. When software stores and loads the files, text will likely be a poor choice.

But you are right, the file size consideration is not all that relevant. However, what was mostly discussed was the claim that serializing numbers as text is more space efficient, which, as I mentioned a few times, can only be the case when the straight binary serialization is done poorly.


Of course... I never thought that using text files to reduce file sizes was sane. Maybe it came across like that because I don't write in a wishy-washy way as if I were stepping into a minefield. Look at my first comment: I think I started with "I know it is counterintuitive". I wanted to point out that binary (as naturally interpreted: storing floats/doubles) does not necessarily mean smaller file sizes for files generated by humans.


I too am curious. Not that it really matters. The statement itself is a tautology. There is no "opinion" here. Just straight logic and math. Text serialization is a way to serialize numbers. It's just a very poor and roundabout way made for human readable text. The idea that such a roundabout way of storing numbers as other numbers should somehow make things more efficient than storing the actual numbers... Is kinda silly.


The computational burden here isn't so stiff as to force optimization, just FYI. Even high-poly-count models load and process on a reasonable machine in a reasonable amount of time.


> much more space than storing the text

In other cases it’s the opposite. Convert a model from millimeters to inches, and that 1/25.4 multiplier is going to make the decimal representations much longer than 8 digits. Mechanical engineers do things like that pretty often.

> when humans enter data in some specific units

I don’t think it’s reasonable to expect a human to type 100 MB or 1 GB of numbers, yet people do work with triangle meshes that large. These models are made by specialized hardware or software tools, not by entering numbers. It’s the same story with images: they have too many megapixels now, and people aren’t entering millions of RGB values by hand. Instead, they use scanners, digital cameras, and Photoshop to deal with these volumes of data.

> Text compression of number data is extremely efficient

The most expensive part is not compression, it’s printing and parsing. That code is very hard to vectorize or parallelize.

When software reads or writes a large contiguous buffer of binary data, like a std::vector in C++, the throughput is often limited by the physical disk. On many modern computers that’s at least 100 MB/s, often much faster.

I think it’s borderline impossible to print or parse float numbers that fast. Especially given that in 3MF, there’s some XML producer/parser down the stack who needs to escape/unescape these strings.


I understand all your points (I have implemented all sorts of parsers, even over mmapped memory regions, hopped around in text parsers to avoid scanning for '\0', done in-place changes, etc.) and I agree with them.

I think text is a good format for the "truth"; a one-to-one binary representation of that text, to avoid float parsing, is the best alternative. Counterintuitive as it is, the text representation often takes less space (sometimes some columns contain a great many small round values, for example 0s, 1s, or 2s).


This is an issue that shows up often. ASCII is used because the performance cost is less than the benefit of guaranteeing that the exact value retrieved is the exact value stored.

I often wonder why BCD file formats are not used.


In the inches/mm case, my choice would be: if we can support units, write the file data as given by the user and use that as the ultimate source of truth; use a more efficient/standard representation in the backend, with processes that test that the user's file can be regenerated as accurately as possible from the backend data. But of course, it depends on the exact case and how much users are expected to edit their files by hand.


This is best practice, IMHO. Consider the input data immutable and work from there.


The problem there is using inches.


While many agree, and I for sure do, fact is manufacturing in a lot of the US is a mixed unit scenario at best.

Much of Additive uses SI units, fortunately. One of the bigger players remains with Imperial units, unfortunately

I do sales, service, support in this industry. One of the more hilarious queries I got recently was, "Does your machine support freedom units?"

We all had a nice chuckle, and yes that shop works mixed and it's a pain point for them being a mix of older school and new school tech and people all over the map. They don't see a transition in the near future either. Too much of the work remains Imperial as does equipment that has a long service life ahead.

The real problem lies in formats and software that make assumptions about units, and/or make it difficult to manage or input/extract mixed-unit data.

I see meters used fairly often too. Not the same non integer conversion, but the other problems do remain.


Maybe creating a standard textual hexadecimal-double format could be a good middle ground. Something like "a.2aE3b" is easy to include in text, easy to parse, and maybe easy to read once you're used to it.

   hex   => dec
   a.8   => 10.5
   f.88  => 15.53125
   9.e   => 9.875
   11.8  => 17.5
   a.8E1 => 168  (10.5*16)
   a.8E2 => 2688 (10.5*16^2)
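For what it's worth, a textual hex-float format already exists: C99's `%a` conversion and Python's `float.hex()`/`float.fromhex()` are exactly this idea, though with a power-of-two `p` exponent rather than the digit-shift `E` sketched above:

```python
# Exact textual round-trip via hexadecimal floats (built into Python).
h = (10.5).hex()
print(h)  # 0x1.5000000000000p+3

assert float.fromhex(h) == 10.5
assert float.fromhex("a.8") == 10.5    # 10 + 8/16, the table's first row
assert float.fromhex("9.e") == 9.875   # 9 + 14/16

# Unlike short decimal text, the round-trip is always bit-exact:
assert float.fromhex((0.1).hex()) == 0.1
```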


Rather than constraining to floating point, why not just use a 4-bit BCD format with a decimal-point symbol and a terminator symbol? That way you can express exact values efficiently. Of course it becomes unwieldy for extreme values (such as 10^100), but for most reasonable values (10^-20 to 10^20) it beats both floating point and ASCII for size and speed.
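A minimal sketch of that packed-BCD idea, assuming two of the six spare 4-bit codes serve as a decimal point and a terminator (sign handling omitted; all names here are hypothetical):

```python
POINT, END = 0xA, 0xF  # two of the six codes left over after digits 0-9

def bcd_encode(s: str) -> bytes:
    """Pack a decimal string like '3.25' into 4-bit nibbles."""
    nibbles = [POINT if c == "." else int(c) for c in s]
    nibbles.append(END)
    if len(nibbles) % 2:          # pad to a whole number of bytes
        nibbles.append(END)
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

def bcd_decode(b: bytes) -> str:
    out = []
    for byte in b:
        for nib in (byte >> 4, byte & 0xF):
            if nib == END:
                return "".join(out)
            out.append("." if nib == POINT else str(nib))
    return "".join(out)
```

Here `bcd_encode("12345")` takes 3 bytes against 5 bytes of ASCII, and the value round-trips exactly, with no binary rounding.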


I acknowledge that in a correctly designed binary format given data may in some cases be more compact than when represented as compressed XML.

I ask myself though will it practically matter in the 3MF vs STL case, given 3MF uses a different approach to represent geometries that produces both more accurate and far more compact results than STL for a similarly high precision[0]?

[0] https://hawkridgesys.com/blog/3mf-the-file-format-for-the-fu...


You could store decimals (with a scale that is either a power of 10 or a power of 2) at a given precision. Essentially, when storing floats as text, you are already doing that.
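A fixed-point sketch of that, with an arbitrary base-10 scale (three decimal digits here; values are assumed to fit the chosen precision and the 32-bit integer range):

```python
import struct

SCALE = 1000  # hypothetical choice: three decimal digits of precision

def encode(values):
    # Store each value as a scaled 32-bit little-endian integer.
    return struct.pack(f"<{len(values)}i", *(round(v * SCALE) for v in values))

def decode(blob):
    return [v / SCALE for v in struct.unpack(f"<{len(blob) // 4}i", blob)]
```

`decode(encode([0.1, 3.25, -12.5]))` gives the values back exactly, because each decimal the user typed maps to an exact integer.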


The binary STL format uses single-precision floats. But the point is much more that parsing takes CPU cycles.


Reading those numbers is only a minor part compared to processing the data in the slicer. While you are right that a better format could be imagined, I would call this a minor or even a non-issue. After compression, the file sizes should not differ as much as the raw data suggests.

> Text formats are good when human readable, but no sane person gonna read 1GB of data.

People regularly edit the resulting G-code file, which is even bigger. They use old-fashioned text editors for this, which works because it's not binary but simple text with readable numbers. Then those huge files are processed by little 8-bit computers that have barely enough processing power to move the motors.


All true!

Some common edits are:

Change in process parameters mid program, speed / feed rate tweaks to optimize production jobs, tool number change to run job on different machine / tool setup.

Big G-code files are my favorite text editor test.


I think 3MF has some real practical benefits, as outlined in the article. It may take a lot of file space, but that can be solved in a new version, flagged in the file header, the same way it was done with STL.

As with any format, the tooling needs to develop, and C++ is not a bad language to write a library in, I think.

I guess there's space for both file formats, tbh. If you are sure your STL is correct, and are sure about the dimensioning, go for it. If you want to be a little safer, go for 3MF, with the drawback of a somewhat larger file size.


> that can be solved in a new version

Yeah, if in a future version they move the vertex/index buffer data from the XML into binary files inside the same ZIP (3MF files are ZIP archives), that alone will be an improvement. Printing and parsing numbers is surprisingly expensive when you have a gigabyte of them.
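A sketch of that layout using ordinary ZIP tooling; the entry names and the referencing attribute are made up for illustration, not taken from any 3MF spec:

```python
import io
import struct
import zipfile

vertices = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0]  # one triangle

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    # XML keeps the structure; the heavy numeric data lives in a sibling
    # binary entry instead of being printed as decimal text.
    z.writestr("3D/model.xml",
               "<model><mesh vertexbuffer='3D/vertices.bin'/></model>")
    z.writestr("3D/vertices.bin", struct.pack(f"<{len(vertices)}f", *vertices))

with zipfile.ZipFile(buf) as z:
    raw = z.read("3D/vertices.bin")
    loaded = list(struct.unpack(f"<{len(raw) // 4}f", raw))  # no float parsing
```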

> C++ is not a bad idea to write a library

C++ is fine by itself. Still, if the format were simpler, the official library would be optional, since people could implement support in any language without FFI. That's hard to do in practice, which leaves the official library as the only practical option, IMO. At least that's what I did when a client asked me to support the format.


I don't get why a new polygon data format is needed at all. Sculpted models are inherently polygons, and that's fine, but almost as large a share of 3D models are CAD models "rasterized" into triangles, and that's just pain. Patent and licensing reasons aside, why can't we just pass around CAD data to feed CNC machines?


Some parts of 3MF are actually good. I like that the spec documents the winding order of the meshes. The format includes units, plus optional 4x4 transformation matrices; both are useful.

As for the need for new formats: for one, modern CAD formats are insanely complex. These IGES/STEP/BREP files require many megabytes of very complicated code to deal with, such as this library https://github.com/tpaviot/oce , and they may even contain proprietary extensions. They also need non-trivial processing power to handle. Few people would want a Core i7 with gigabytes of RAM in their 3D printer; it inflates hardware cost and software complexity.

Besides, we now have high-resolution 3D scanners, and CAE software that algorithmically optimizes models by running numerical simulations. Both output triangle meshes instead of CAD files. Scanners often output point clouds, which are easy to convert into triangles but hard to convert into CAD formats.

I just don’t like the 3MF implementation much. XML is fine for kilobytes of data, but not for many megabytes. If I were designing the format, I would probably have made it binary. Maybe EBML https://en.wikipedia.org/wiki/Extensible_Binary_Meta_Languag... would work; it does fine for MKV videos, which are also huge piles of structured data with non-trivial performance constraints for producers and especially parsers.

Another minor thing: it was not the best idea to make the name start with a digit. Most programming languages forbid identifiers like that for classes / functions / namespaces / modules.


There are a few goals for 3MF. One is to have a common format that is portable across tools made by the big companies participating in the consortium. Another is support for specifying material in addition to shape. STL can represent a part, but can’t distinguish it as a specific metal or plastic. STL also can’t specify color. Those properties can be used in engineering software to simplify analysis and visualization. 3MF also supports multiple parts, which is useful in a multi-material build. For example, when a dentist 3D prints a mouth model they might use white for the teeth, red for the jaws, and a removable support material. With STL each of those sub-models would be a separate file, meaning the 3D printer operator would need to properly position them and choose appropriate materials at build time.


I've not seen it used much, but STL can represent multiple parts.


> why can't we just pass around CAD data to feed CNC machines?

CAD data from products like Fusion 360 or SolidWorks depends on a geometric modeling kernel, which is a complex piece of software. To do anything with that data, you need to run the kernel, and run it the same way the software package does. While there is an open-source kernel (OpenCascade), it's not as fully featured as the industry leader Parasolid, and in any case not compatible with it.

To use CAD data in a printer or CNC, you'd possibly need to pay a license for the kernel, but also have the exact version available, and therefore dozens of versions for compatibility. You'd have to support and debug all those complex kernels, handle compatibility upgrades, have enough CPU/RAM/disk, and so on. And then subtle bugs in different versions/kernels would produce different results. Imagine the support nightmare.


Personally, I much prefer text for manufacturing data. File size isn't a big deal for the intended use cases and that file contains setup information anyone can read and make use of. That's actually worth the format tradeoffs right there.

G-code generated from these files often contains that info, and some additional info specific to the job created. Same benefits of plain text.

Often, I get these files and that information being available, easy to read without having to have specific software is a huge win.

Humans do read these files :D


https://en.wikipedia.org/wiki/3D_Manufacturing_Format#Sample...

It's not the dumbest engineering data format I've seen (seen many, this is at least one you can parse).

But ffs, just use HDF5 people.


> It's not the dumbest engineering data format I've seen

It’s not, but if the mesh data were binary, the format would be much better.

> just use HDF5

Last time I checked (a few years ago), HDF5 could only realistically be handled with the official C++ library. That library didn’t even support multithreading: when I tried to write 2 HDF5 files from 2 concurrent threads, the program crashed with broken data structures in some global variables deep inside the HDF5 library.

My computer has 16 hardware threads, the servers I’m targeting often have more of them, and I’d like to actually use that hardware. Single-threaded libraries aren’t OK in my books; not in 2021.


It's a pity; there are also some other limitations (with strings in HDF5 files, for example). The format itself is, to me, the closest thing to "the ultimate format"; however, library/tool support is a bit disappointing, which is sad. I came here to say that 3MF could be serialized into HDF5, keeping the advantages of both: the textual description (which is much much more futureproof when one looks 10/15/20 years in the future) alongside a one-to-one fast-access counterpart.


> which is much much more futureproof when one looks 10/15/20 years in the future

IMO the main reason binary formats suffered from compatibility issues in the past was the huge variation in hardware. We had little-endian and big-endian CPUs, 16/32/64-bit word sizes, and earlier still there were even weirder computers with non-two's-complement signed integers, or 12 bits in a memory word.

By now we only have big endian CPUs. Vast majority of them are 64 bits. All of them are using IEEE floats, at least for FP64 and FP32 numbers; for FP16 there’re several versions, unfortunately. None of that is likely to change in 10 or 20 years.

For applications where performance matters, like image and video codecs, we have binary formats that are more than 20 years old, still in wide use, with little to no compatibility issues. Examples are PNG from 1997 and MPEG-4 from 1998.


*little endian


There are still many big-endian HPC systems running... Maybe in 20 years everything will be little-endian.


Indeed. Thanks for the correction.


HDF5 supports parallel I/O with MPI, which works fine for HPC applications, but I can see how this might be a reason against HDF5.


Unfortunately, HDF5 failed as a universal format. I really enjoy it, but support across languages is extremely bad, so you end up writing your own adapters or using only the subset available in your language's poorly maintained libraries.


HDF5 is great for many things but probably not as an application file format. In my experience it’s really hard to ensure you don’t write an invalid/corrupt file during exception handling etc. It’s great for data archival and batch processing workloads though.


We have a few million unique users, which is small within the broader context of 3D creators and consumers - so small N, but I'll share a couple observations at that volume.

Adoption

STLs remain the most popular, today, on our platform. 3MF has been growing, but only in the past quarter did our users tell us in meaningful volumes that we need to support it - which we did, happily. The push to 3MF seems to be gaining momentum.

Expected benefits vs current state (again, small N and biases in my data)

1. Color/texture is a benefit, though creators include it to varying degrees based on their use case and the intended consumption. We support printers, general 3D designers/sculptors, and mechanical engineers - so our audience is broader than primarily/solely 3D printing. As such, adoption varies considerably depending on the user and use case.

2. Often, the model has more metadata (though rarely color, photorealistic renders, etc.) and a smaller storage footprint. That benefits creators, consumers, and platforms. Though again, adoption lags on the more advanced capabilities inherent to the format.

3. Support for assembly like models is a huge benefit! However, the average 3D printing oriented creator still prefers collections of STLs. In user interviews, I often hear the rationale for this is that there are (perceived) tooling and general workflow changes that seem time consuming - so the usual.. change/learning takes time.

Timing

Creators will drive adoption, as is often the case. We are seeing this increase every month on our platform (thangs.com), but the relative share of uploads still skews towards STL and other formats.

Platforms will need to adapt and take advantage of the format benefits to help consumers see the value.

Over the mid term, I am really excited about 3MF. The fact that there is a spirited discussion on HN about it, if nothing else, strikes me as a great sign!


>Support for assembly like models is a huge benefit! However, the average 3D printing oriented creator still prefers collections of STLs. In user interviews, I often hear the rationale for this is that there are (perceived) tooling and general workflow changes that seem time consuming - so the usual.. change/learning takes time.

Yes! STL actually does this, and I've employed it for years, but I often find software isn't expecting or designed for it, and/or others are not aware and use the bundle-of-files workflow, so it's easiest to just go with that flow.

Model metadata is exciting to me, and it's for reasons similar to those found in other manufacturing contexts. JT is a great example, and in the 90's VRML actually got used for model + metadata representation by SDRC / FORD to communicate tolerances and other data along with the geometry.

Things are very slow to change. Decades for some of this stuff. We were capable of real paperless in the 90's, and still... Not there, but more there than not these days.


Well said. What I suspect we are seeing is the beginning of a trend where the rate of change begins to accelerate.

If nothing else, the growth in new 3D creators (be they mechanical, design/sculpt, AR/VR/MR/XR, printing/additive) will bring in new individuals less beholden to the way things have been done in the past. This is happening today.

If we look at those with some of the highest 'change cost', we can look to CAD and PLM within manufacturing. Most of the tooling in that category (design clients through to PLM) hasn't even come close to keeping pace with modern conventions. I'm as excited as you, even within that relatively small, change averse context.


And, where that tooling does provide for modern conventions, it often goes unused. Same inertia in play that we have been discussing.

I have only a quibble or two with 3mf.

Curved triangle representation is one, and lack of source topology is the other.

At some point, we need to generate real curves, and frankly I would be pleased with second degree curves. (The analytic ones: arc, conic, hyperbola, parabola)

When describing small features, and when coupled with machine process requirements, chordal deviation becomes a real issue. We either end up overloading the path-planner subsystem with a crazy number of too-small line segments, or accept a cutoff of 16 ish and deal with undersized radii.

As far as I understand, we do not yet have those features in the output file specs just yet, and it sure would be nice if we did.

Frankly, arc fitting (and potentially fitting of second degree curves in general) makes some sense at this point, just to make use of firmware that supports it.

Software I am involved with has headed down this road, both to resolve small features correctly so that they are printed correctly, and to enable feature discrimination so that tool paths can be generated with higher order formulas intended to improve machine performance as well as part quality and consistency.
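To put some numbers behind the chordal-deviation problem: the sagitta of a chord spanning angle theta on a circle of radius r is r(1 - cos(theta/2)), so a tolerance-driven tessellator must emit quite a few segments for even a tiny radius. A rough sketch (the function and the numbers are illustrative, not from any particular slicer):

```python
import math

def chords_needed(radius, tolerance):
    """Number of chords so each segment's sagitta (chordal deviation)
    stays within tolerance. Sagitta for span angle theta:
    radius * (1 - cos(theta / 2))."""
    # Largest allowed span angle for the given tolerance.
    theta = 2 * math.acos(1 - tolerance / radius)
    return math.ceil(2 * math.pi / theta)

# A 0.2 mm radius hole at a 0.001 mm tolerance:
print(chords_needed(0.2, 0.001))  # → 32, well past a 16-segment cutoff
```

This is why a fixed segment cutoff undersizes small radii: the chords all lie inside the true circle, and the fewer there are, the more material creeps inward.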


Watching the new types of creators is super interesting!

I am fluent and trained many groups on high end tools, like NX. Sadly, I made a ton showing people how to do higher order things with the base toolset. (Almost nothing one cannot build to high fidelity these days)

I feel there is a lot of false value out there in those tools too. We would all benefit from the base toolset being extended and deepened some so broader adoption is on an incentive path, not so discouraged for license cost reasons as it is today.

And I get why. The hit in revenue would be profound and the skills needed to really use those tools remains a barrier, meaning there is no realistic volume strategy.

Still...

In my current role, I get the opportunity to use a lot of new tools. Have almost no dependence on the traditional CAD/PLM stacks!

I can use that stuff, and will at times because it makes the most sense.

But, in the additive sense, one can create with Python, or OpenSCAD, Blender (!?!), etc.

Voxels are becoming a thing, as are various mesh tools that are not chained back to mega geo kernels...

I suspect as these people mature into the manufacturing and design scene, we will see another wave of changes on par with what NURBS and solid models did compared to wireframe.

And that clash? (With the traditional geometry stacks)

Might be epic!

In a good way.

One particularly interesting aspect, or potential I should say, is that the non-NURBS approaches can be employed in both concurrent and parallel fashion!

Something like Parasolid can be employed in concurrent fashion, and that depends on model history being present or not and how it is shaped. We see that today when users of the high end tools really put the features to use.

However, there are limits, and a largely sequential compute-path requirement on any given branch.

When one sets that aside to favor other ways of representing models, parallelism enters the game and the possibilities are very well aligned with additive.

One last ramble, because I live this stuff: Hybrid systems may well be able to incorporate both, making the big tools bigger, and that may be one outcome from that clash I mentioned above.

However it all goes, and I may well have it all wrong, we are headed into new and very interesting times.


Couldn't agree more with your points, and I agree that this clash is going to happen.. and that will be a great thing if it does!

If you are interested in connecting - I would be happy to trade observations over a video meeting and coffee/beer. d@physna.com

Disclaimer: I don't know if we are allowed to post this on HN, off the official posts, but I'm also hiring. We do 3D search, are growing rapidly, and have some excellent investors: Sequoia, Google Ventures, Tiger, etc.


Looks like a pretty cool project you got going there. I wish you luck, and success! I always thought that was a high-value space, but many of the tools are just thick. Expensive, kind of painful.

I'm not looking at new opps right now, (have a startup of my own ramping up) but we may find a chat worth it. I'll try and connect with you in the near future. :D


> 3MF file format and why it’s great (2019)

>3MF, also called 3D Manufacturing Format is an open-source project developed by the 3MF consortium founded by Microsoft.

Everything Microsoft does is great. I wonder if i have to delete some registry keys to use it just like for my dvd drive /s


STL works everywhere. Is trivial to parse. And can contain both colors and units [0].

Is there anything stopping us from having multiple solids per file? If not, I don't see the reason for another format.

The mentioned benefit of having slicing settings in the file will not work. Slicing settings are not portable between machines. And not portable between different kinds of filament.

Can someone post the XKCD about additional standards? :)

[0]: https://en.m.wikipedia.org/wiki/STL_(file_format)
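To illustrate the "trivial to parse" point: the binary STL variant is an 80-byte header, a uint32 triangle count, then 50 bytes per triangle. A minimal reader sketch (assumes a well-formed file, no error handling):

```python
import struct

def read_binary_stl(path):
    """Minimal binary-STL reader: 80-byte header, uint32 triangle count,
    then per triangle a float32 normal, three float32 vertices, and a
    2-byte attribute field (50 bytes total)."""
    with open(path, "rb") as f:
        f.read(80)                                 # header, ignored
        (count,) = struct.unpack("<I", f.read(4))  # triangle count
        triangles = []
        for _ in range(count):
            data = struct.unpack("<12fH", f.read(50))
            normal = data[0:3]
            verts = [data[3:6], data[6:9], data[9:12]]
            triangles.append((normal, verts))
        return triangles
```

The ASCII variant is not much harder; either way it fits in a page of code, which is the compatibility argument in a nutshell.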


> Is there anything stopping us from having multiple solids per file? If not, I don't see the reason for another format.

I think TFA offered a pretty compelling argument for why you should consider using 3mf for 3D printing, and I think you glossed over it:

> 3MF provides a clear definition of manifoldness — it’s impossible to create a 3MF file with non-manifold edges, and there is no ambiguity for models with self-intersections.

Even this is enough of a reason for me to prefer using a 3mf if available instead of having to fix holes in a godawful mesh editor. "STL works everywhere" is true only if you consider incidentally non-manifold STLs as an issue with the software that produced them and not the format itself.

I would like to add another technical detail that I don't think is included in the article -- 3MF uses curved triangular tessellations to encode geometry. This means more accurate representations of geometries and smaller file sizes even with high detail.
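Because 3MF meshes are indexed (triangles reference a shared vertex list with consistent winding), the manifold condition is mechanically checkable: every directed edge must appear exactly once and be paired with its reverse. A rough sketch of such a check (this is my own illustration, not how lib3mf implements it):

```python
from collections import Counter

def is_manifold(triangles):
    """True if every directed edge of an indexed triangle mesh appears
    exactly once and is matched by its reverse, i.e. each undirected
    edge is shared by exactly two consistently oriented triangles."""
    directed = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            directed[(u, v)] += 1
    return all(n == 1 and directed.get((v, u)) == 1
               for (u, v), n in directed.items())

# A tetrahedron is closed and manifold...
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_manifold(tetra))      # → True
# ...drop one face and the boundary edges have no partner.
print(is_manifold(tetra[:3]))  # → False
```

An STL file can't be checked this cheaply, since each triangle carries its own vertex coordinates and "the same vertex" is only recoverable by fuzzy float comparison.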


I would consider that an issue with the software. Non-manifold models, or self-intersecting models, are not suitable for 3D printing. This is an issue with the model rather than the file format.

If 3MF somehow makes self-intersection and non-manifoldness impossible, can't we run STL files through the same algorithm to end up with a "fixed" mesh?

What happens if you try to save a non-manifold model as 3MF? Will it magically fix the issue or will it fail to generate the file?


...yeah I have to agree... xml is a bit trash, STL works and I don't see how the file format itself can preclude invalid structures, and as others have pointed out the whole text versus binary means a massive slowdown and inefficiency...

The algorithms and the vector compressions could likely be salvaged, but the whole positioning as a 'replacement' for STL seems like red-herring marketing, with lock-in to whatever stdlibs they are providing.

It would have been better as a standard binary format, there are a number of choices...



I'd like to know the backstory of the .amf format mentioned in the post.


AMF is an ISO standard, which means the full specification can only be bought from the ISO store itself.

I'm not sure about the ambiguity mentioned in the blogpost, as ISO standards are usually rather thorough. I suppose it's because most of the software implementing AMF did so without the full specification at hand and assumed some things.

For me STL/AMF and 3MF are complementary. When I design a piece in FreeCAD and want to export it, AMF or STL is good enough, as FreeCAD doesn't know anything about the printer I'll be using or the settings I want. But once in PrusaSlicer it makes sense to use 3MF to keep everything tight, for example if I need to move an already configured file from home to the office.


> STL is good enough as FreeCAD doesn't know anything about the printer I'll be using or the settings I want

This is exactly how I feel about all this. Logically, 3MF is superior to STL, but emotionally the STL feels like it is about "the model" whereas 3MF feels like it is about "the print settings, with an STL model happening to be in there too".


Designing a new xml zip based file format is like going back to 1990...


.... 1999. At least, that's when I remember learning about XML.

Wikipedia adds that XML 1.0 didn't come out until 1998 and started in 1996.


Well said. In some respects (not all), on our teams that manage geometric indexing, we've been comparing this to Office document formats a long time back.


there was no xml in the 90s.


A new text based 3D format in XML... in 2021?

> Microsoft

Oh that makes more sense.


> in 2021

This post is (should be) `(2019)`, and 3MF first released in 2015 according to Wikipedia.


Thanks. 2019 and 2015 aren't much better though. There's been a big push for glTF, but it's quite complex; I don't see it being adopted. XML isn't the answer either, though.


> A new text based 3D format in XML... in 2021?

Well, thank god they are using a standard, interoperable, widely understood and supported text-based format like XML.

Any other solution would have been terrible, especially if yaml or json were involved.


Why would JSON have been terrible? Because it's lacking namespaces and all those "quirky" things most XML implementations (and users) get wrong?


Using JSON for exchanging long-lived documents is bad.


Can you elaborate?


You took the bait, mate.


Nah, I think he's serious. At least judging by his other comments in a similar vein ...

> JSON is a verbose mess.

Or did I get baited again? Calling JSON a verbose mess while comparing it to XML is pretty weird ... argh! What is real and what isn't anymore? Do we even exist?


I suspect he's a wumao for the JSON standards body.


I don't know of a single model format in widespread use that uses YAML, and only one that uses JSON (Three.js object files).


strictly better than yaml.


These files are pretty much OOXML files: same structure, and they even have a [Content_Types].xml


The grammarian in me has to cringe at chopping an initialism in half. It should really be 3DMF.


Maybe, but 3DMF is a 90s QuickTime/QuickDraw3d file format from Apple. Since this is /also/ a model format, you wouldn't want that.



