BitMastro's comments | Hacker News

A year ago I developed an Android client for Freebase, but unfortunately I never had the chance to finish or publish it.

I guess now it's too late :D

If anybody is interested, the APK is here:

http://goo.gl/6BxAZJ



'Best practices' sounds a bit pretentious to me; 'how we dev at Futurice' would be less presumptuous. Some of these points are common sense, others are arbitrary library or architecture choices that are not going to fit every project, and some just need to be updated ASAP.


Something like this http://www.panoramio.com/map ?


I think that's what the project is using. But I also thought Panoramio required people to submit photos to it; it doesn't just pick up every photo it can find online (like Google Images does). I've never uploaded anything to Panoramio, but I've taken a ton of pictures.


Yup. You do have to submit photos. It has a selector to import Google+ photos, but it doesn't have all your photos automatically.


You're comparing apples and oranges: Apple had the privilege of choosing its own hardware and programming language.

For Android they went with Java because of its popularity, and so everyone can use it on their own hardware.

So you have a Java-based language running on ARM, x86, and MIPS; what would you choose? A proven approach like a JIT VM, or AOT?


> You're comparing apples and oranges: Apple had the privilege of choosing its own hardware and programming language.

Google and Apple both used already existing languages and adapted existing kernels. If you're saying that Apple didn't use an existing language or that Google didn't have the option of being more restrictive with hardware, then you are wrong on both counts. Google made choices which turned out to be worse for users over a certain timeframe. They were more attractive to vendors and manufacturers, however.

> For Android they went with Java because of its popularity, and so everyone can use it on their own hardware.

Google was optimizing for programmer popularity, not user experience.

> So you have a Java-based language running on ARM, x86, and MIPS; what would you choose? A proven approach like a JIT VM, or AOT?

If you're going to go with what's more proven under constrained resources, then you'd go AOT. If you're trying to optimize for UX and lag while staying high-level enough for rapid development, then AOT with a reference-counting GC.


I didn't say that; I said that both went for what was the optimal choice at the time.

Why didn't Apple go for Swift from the start? Because Objective-C was used on NeXT and then OS X and so on... when the time was right and Swift was mature, it was introduced.

The same goes for JIT VMs, which, again, are the norm for Java bytecode.

Also, Google could not be restrictive about the hardware, since they are not a manufacturer and the OS is open source. Restrict it to ARM CPUs only? Fine, Intel forks it and ports it to x86 anyway.

Smart people work at both companies, and they evaluate the pros and cons of every approach anyway.


Re: "For android they went for java because of popularity, and everyone can use it on their own hardware."

True, but only the hardware-independence part: Android was targeted at any vendor's hardware, while Apple targeted only their own A-series processors.


I'll give it a try:

Privilege separation and permissions since the beginning.

Only the superuser was allowed to install new software.

(Simplifying) Different distros and different versions created diversity, making it difficult for an attack to spread across all installations.

Typing a password for additional privileges requires more attention than clicking a button.

AppArmor has been enabled by default for a couple of years now; it used to break some stuff, but not anymore.

(Simplifying) New files are not executable, and the system doesn't rely on file extensions to determine the associated program.

Since Linux is not the default, it requires a learning curve that people using Windows don't have, so its users are more tech savvy.

Since the source code is available, more people COULD have a look at security vulnerabilities, and in case of emergency they don't have to wait for someone else to provide a patched binary.

That said, I don't consider security on Windows to be a disaster. It is certainly improving, and in general they also pay a lot more attention to security.


1) Linux and other Unixes were created with the idea of privilege separation and permissions baked in. Windows had to add them later, while keeping compatibility.

2) Linux has a variety of kernel and library versions across its install base, making it difficult to exploit uniformly.

3) MS is indeed capable of making secure OSes, I don't deny it, but you should not use Xbox, Windows Phone and RT as examples, since all three of them can ONLY install approved software (less so for RT, but it's not used by end users in the same way).

4) MS could have used the same approach as Linux and Android (and partially OS X): have a central, approved and monitored repository of software, but give the possibility to add external software by jumping through a few hoops, e.g. entering a password or checking a couple of checkboxes before allowing untrusted installations.

5) To prove the point, Android malware exists almost exclusively outside of the Google Play Store. I've never heard of someone getting malware by using Android, while I know a handful of people who got malware on Windows (this is anecdotal experience, but I don't have any other data).

6) The shitstorm arose because on some Secure Boot implementations it was impossible to disable it.


1) IIRC, the Windows NT family had more granular permissions than Linux. Granted, before XP Windows was quite insecure, as I said in my original comment.

2) Still, we see a lot of bugs and exploits that affect large swathes of Linux machines.

3) My entire point is about popular OSes that are used by nontechnical users and that allow third-party installs.

4) Even OS X took a lot of heat for sandboxing apps and making third-party apps difficult to install. Microsoft tried that with UAC in Vista and it didn't go so well.

There isn't much stopping Linux malware in repos if the Linux desktop gets more popular. http://www.zdnet.com/blog/hardware/how-much-more-malware-is-...

Heck, even kernel.org was rooted and they still haven't revealed what happened. Not to mention other distros which were also compromised at some point.

5) http://www.pcworld.com/article/2099421/report-malwareinfecte...

6) Which ones? (apart from RT ARM machines that were a total flop in the marketplace and are like iPads)


I agree in general with all your points, apart from 4 and 5.

Malware in Linux repositories is "practically" impossible. Software is most of the time peer-reviewed and patched in different ways by different distros. And if a particular piece of software becomes more popular, it also comes under scrutiny from more people who want to change the source to add their own features. All packages are checksummed, and repositories have cryptographic keys to establish authenticity.
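Purely as a sketch of the checksum half of that claim (illustrative only; real package managers such as apt or dnf also verify GPG signatures on the repository metadata, and the file name and digest below are made up):

  import hashlib

  def verify_sha256(path, expected_hex):
      """Return True if the file at `path` matches the published SHA-256 digest."""
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 16), b""):
              h.update(chunk)
      return h.hexdigest() == expected_hex

  # Hypothetical usage: the expected digest would come from the distro's signed
  # repository metadata, not from the same place the package was downloaded.
  # verify_sha256("somepackage.deb", "9f86d081884c7d659a2feaa0c55ad015...")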

Of course bugs and security vulnerabilities exist, but the same applies to other OSes as well. And I do understand that UAC is obnoxious for users, but they didn't care about creating problems for legitimate users in the No-IP case, since it was posing a danger.

That Android report makes two questionable assumptions: a very wide definition of malware (by that standard, installing Java should count as malware because of the bundled toolbar), and it glosses over the fact that malware usually doesn't last more than a day before being removed automatically.


No-IP's primary use case is certainly not botnets. It's used by DSL users to connect to their home networks, to get an easy-to-remember address for a VPS, or maybe while developing something before getting a proper domain.


I would like to thank you for creating Go Read and making it open source; it's what I use as my RSS reader, on my own free App Engine quota. I hope I can contribute back some day.

  "30-day trial: This action cost me about 90% of my users. Many were angry and cursed at me on twitter. I agree that it is sad I did not say I was going to charge from the beginning, but I didn't know that I would be paying hundreds of dollars per month either."
Honestly, I think the sense of entitlement nowadays is way too high: people use a product that is free and took weeks or months of your own free time, and then complain when you change it.

If you decide to charge for it, you're a greedy bastard; if instead it's free, they say "if you aren't paying, you are the product". Others complain that the product doesn't work, when it's really a case of PEBCAK. When it's not, it means you're going to say goodbye to a couple of nights' sleep or a weekend or two, or maybe it's ONE feature away from being perfect (again).

...sometimes I hate people :(


People get to be mad (within reason) when a free service stops being free without prior warning, because they too invested time and effort in a product, making it part of their lives, only to have that wasted when the terms change.

That is not to say, in any way, that you shouldn't charge - but don't expect users of a free application to switch to a pay-to-use model en masse. Unless, of course, you don't have any competitors who do a similar thing for free...


True, but be polite about it: "Sorry, I'm disappointed because my expectations differ from yours. I'll find an alternative. Good luck with your project."


Having to pay a few dollars a month is hardly a trigger for calling something "wasted"... In my own humble opinion.


"Honestly I think the sense of entitlement nowadays is way too high, people use a product that is free and needed weeks or months of your own free time"

How is he going to recoup anything for his time if you use his product in a way he can't charge for?


Sometimes we spend time on a project for the good of the community, and that's OK, because sometimes I give back time, sometimes someone else does, and so on.

It's also OK to try to get some money out of your time, if that's what you want.

Not OK: using someone else's time for personal profit, expecting someone else to pay for your actions, etc.


The transition wasn't handled very gracefully...

I was an early user, and one day I went to read my feeds, only to be greeted with a very curt "trial expired" screen.

That was the first indication of any kind I'd had that the service was going pay-to-play.

I wasn't upset, but I could see how someone would be -- it was just a very abrupt, almost rude, way to communicate the change.

(Caveat: All from memory. Memory is unreliable, so take w/ salt.)


Maybe a solution could be weighting the vote according to the user's history: a user leaving a single vote on a single movie shouldn't be as influential as a user who has voted on a wider range of movies over time.
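A toy sketch of what I mean, assuming nothing about IMDB's real formula (the log weight and the numbers are made up for illustration):

  import math

  def weighted_rating(votes):
      """votes: list of (rating, number of movies that voter has rated)."""
      num = den = 0.0
      for rating, history_size in votes:
          weight = math.log1p(history_size)  # grows slowly with voting history
          num += weight * rating
          den += weight
      return num / den if den else 0.0

  # A lone 10/10 from a single-vote account barely moves an average built
  # from voters with hundreds of ratings each:
  print(weighted_rating([(10, 1), (6, 300), (7, 450)]))  # ~6.7 vs a plain mean of ~7.7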


This kinda happens already, or at least it used to many years ago; you have to be an active user for your vote to count.

The most obvious effect is movies that drop in the rankings 30 days after release; people who just saw a movie give it 10/10 and then stop participating, so 30 days later they are no longer considered 'active' and their vote stops contributing to the overall ranking.


Also the range of their votes.

If someone only gives 10s or 1s to everything, their vote of 10 should probably have less weight than that of someone who distributes their votes more evenly.
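As a rough illustration of that heuristic (again made up, not IMDB's actual method), something as simple as the fraction of a voter's past ratings that are not at the extremes could serve as a weight:

  def spread_weight(user_ratings):
      """Weight in [0, 1]: accounts that only ever hand out 1s and 10s get 0."""
      if not user_ratings:
          return 0.0
      non_extreme = sum(1 for r in user_ratings if r not in (1, 10))
      return non_extreme / len(user_ratings)

  print(spread_weight([10, 10, 1, 10]))      # 0.0 -> extreme-only voter
  print(spread_weight([7, 8, 3, 10, 6, 5]))  # ~0.83 -> votes spread across the scale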


This is not correct. Many people don't bother to vote or review unless the movie is, for them, extremely good or extremely bad, so the hassle is worth it. I've watched a lot of so-so or simply good movies, but I've only voted 5-6 times, and those votes were either a 10 or a 1.


The solution is recognizing that an 'honest' or 'good' metric is not possible. The thing is, to produce such a list you need some function f that maps the complex opinions of each and every user of the site into an orderable set, usually integers up to 5 or 10.


IMDB only counts 'regular voters' in the top 250 movie list.


