Hacker News

A research project might be collecting sensitive personal information.

A charity might have a publicly accessible but internal-use management system for staff or related organisations to organise on.

In either case, being open source increases security risk.



> In either case, being open source increases security risk.

This is blatantly false. Any claim that closed source provides any form of security is a claim of security by obscurity.

If open sourcing your code presents any risk to sensitive personal information, then you are already grossly mishandling that information. Whether or not you open source your code at this point doesn't matter: the harm is already done.


> If open sourcing your code presents any risk to sensitive personal information, then that means that you are already grossly mishandling this information

This is also clearly false.

For example, take this scenario:

- You use web framework Omega, but minimise indicators of this (suppress HTTP headers, etc).

- At 2am, a critical security vulnerability is discovered for Omega and a patch is released shortly after.

- Malicious actors scrape GitHub to find sites that use Omega, and try to compromise them.

- At 9am, you apply this patch.
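The first step of the scenario, suppressing framework indicators, might be sketched as a WSGI middleware that strips identifying response headers. This is a minimal sketch; the header names and the `Omega/1.2` value are illustrative, not taken from any real framework:

```python
# Sketch: strip framework-identifying response headers.
# Header names below are illustrative; real frameworks vary
# ("Server", "X-Powered-By", etc.).

REVEALING_HEADERS = {"server", "x-powered-by"}

class StripFrameworkHeaders:
    """WSGI middleware that filters revealing headers out of every response."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def filtered_start_response(status, headers, exc_info=None):
            safe = [(name, value) for name, value in headers
                    if name.lower() not in REVEALING_HEADERS]
            return start_response(status, safe, exc_info)
        return self.app(environ, filtered_start_response)

# Minimal app for demonstration (hypothetical "Omega" framework banner):
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("X-Powered-By", "Omega/1.2")])
    return [b"hello"]

wrapped = StripFrameworkHeaders(app)
```

Of course, this only hides the obvious banners; framework-specific URL patterns, cookie names, and error pages can still give the game away.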

If your project is open source, there is a 7-hour window in which you are clearly and publicly broadcasting that you are vulnerable.

If it is not, there is a 7-hour window in which you are vulnerable, but this is not readily apparent to attackers.

How would you prevent this risk?


It doesn't work that way. Attackers don't check whether you are using "Omega"; they check whether you are vulnerable. Hiding framework indicators makes no difference here.
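To illustrate: a mass scanner typically skips fingerprinting entirely and just fires the probe for a known vulnerability at every host. A sketch of that logic, where `/omega/debug` and the `OMEGA_INTERNAL` marker are entirely hypothetical:

```python
# Sketch: mass exploitation without fingerprinting. The scanner
# probes every host for the vulnerability directly, so a hidden
# "Server" header changes nothing. Endpoint and response check
# are hypothetical.

def is_vulnerable(send_request):
    """Probe one host. `send_request(path)` returns (status, body)."""
    status, body = send_request("/omega/debug")  # hypothetical vulnerable endpoint
    # A vulnerable install echoes internal state; a patched or
    # unrelated host returns 404 or an unremarkable page.
    return status == 200 and b"OMEGA_INTERNAL" in body

def scan(hosts, request_factory):
    # No fingerprinting step: every host gets the probe.
    return [h for h in hosts if is_vulnerable(request_factory(h))]
```

The point is the missing step: there is no "is this site running Omega?" check for obscurity to defeat.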

Well - unless there is a targeted attack _against you_. In that case the attacker will search for known vulnerabilities in Omega and maybe even try to come up with new ones. Having the source helps attackers here, but then again, it has helped researchers fix vulnerabilities too. So it's a mixed blessing.


This doesn't matter at all.

Attackers either flood you with every attack under the sun, or tear your site apart until they know exactly how it works.

Imagining that you can hide the function of your site is again security by obscurity.

The key idea here (I forgot the name of the law, but others mentioned it in the thread) is that regardless of what you do, the adversary will end up with a complete understanding of how your system works.

Therefore, any security based entirely on the adversary not learning about implementation details is entirely defective.

Furthermore, a vulnerability exists for days, months or even years before it is fixed; it takes time to fix and release a patch, and it takes time for you to discover the advisory and deploy it.

You were not vulnerable for 7 hours. You were vulnerable for weeks, months or years.


> A research project might be collecting sensitive personal information.

The data being processed (personal info) has nothing to do with the source code. You can release the code while keeping the data private.


The source code will indicate where/how the data is input, processed and stored. It might help an attacker compromise the application in any number of ways.

There's non-trivial risk there, enough to make it an ethical concern.

So, in order to use AGPL software, you have to open source your entire codebase, which means going through a long and arduous risk assessment that will likely conclude you can't.


You only have to open source the AGPL'ed code if it's providing a networked service.

Many academics and charities don't provide services, so it doesn't affect them.

When you write "enough to make it an ethical concern", is that a hypothetical concern of your own making?

Many academics must go through institutional review boards or other ethics committees.

Many academics also develop and distribute free software for analyzing sensitive data where IRB oversight is required.

If what you are saying is a real concern, then I expect it would have been brought up long ago.

Can you point to examples?

I believe your argument is equivalent to those saying that Linux-based free OSes cannot be used for secure platforms because the source code is available, so anyone can potentially break in.

So why is it that many people doing research which requires IRB oversight use Linux-based OSes?

I agree with tokai - you're arguing for security-by-obscurity, and there's no evidence that that increases security.

I think the evidence shows that the ethical concerns you suggest don't actually exist.


Security through obscurity is not security at all.

https://en.wikipedia.org/wiki/Kerckhoffs's_principle


I’ve always felt this argument breaks down with smaller scale targets. I’d argue security through obscurity is not security, but there can be safety in obscurity.

There are a massive number of systems that are completely bespoke for small organizations or even individuals, and their user base isn’t going to grow.

What’s more, these systems are extremely liable to rot: the contract developer writes the system and moves on. That means library versions in the repo aren’t going to get updated when new vulnerabilities are found. So now this random 1-GitHub-star system is sitting unpatched, out in the open for anyone to see.

Now what might have been a hard-to-find but exploitable issue risks having a black-hat spotlight shone on it.



