It feels like the constraints and expectations we face building in 2017 are different from those of 2004 (when Rails was created) or 2005 (when Django was created). Both predate the iPhone (launched in 2007) and the dominance of mobile, as well as things like WebSockets (2011), twelve-factor apps, immutable infrastructure, distributed systems...
It feels extremely unlikely that if we decided to design a framework with these things in mind we would end up with something that looks like Rails or Django (or some other RESTish/MVC thing), and yet somehow these are the things we overwhelmingly use and recommend.
I'm a full-stack JavaScript/Ruby dev and Linux geek looking for a place where the stuff I write makes an impact: on users, on the organization.
I built https://www.usesth.is. It's end-to-end JavaScript: React/React-Router/Express/GraphQL/Node.js/ArangoDB (V8 in the database!).
Wikipedia has lots of disambiguation pages but somehow this idea has never made it into the search world.
Perhaps the idea of a single text box that you can type "Michelangelo" into is not a good one. Tracking the user so you can infer some context (is it usually Ninja Turtles or art history with this person?) seems a logical extension of the lunacy of that situation.
I use DDG a fair bit, but without revisiting the assumption that a single context-free text box is even desirable, ditching the tracking (which I am totally in favour of) feels like it dooms them.
I've played a little with running YaCy locally and directing it to crawl only sites I care about. So far that habit hasn't stuck.
The bangs are a step in the right direction.
Suggesting additional search terms isn't quite right, and neither is doing a site-specific search, since I don't know which site will have the information.
Maybe a "metabang" where you search all the bangs in a category? "python !!tech"
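A minimal sketch of how a metabang might expand client-side. The `!!` syntax, the category map, and `expandMetabang` are all hypothetical (DDG has no such feature); the individual bangs (`!so`, `!gh`, `!mdn`) are real:

```javascript
// Hypothetical: expand a "metabang" (!!category) into one query per
// DuckDuckGo bang in that category. The category map is made up.
const CATEGORIES = {
  tech: ["!so", "!gh", "!mdn"], // Stack Overflow, GitHub, MDN
};

function expandMetabang(query) {
  const match = query.match(/!!(\w+)/);
  if (!match) return [query]; // no metabang: pass the query through

  const bangs = CATEGORIES[match[1]] || [];
  const terms = query.replace(match[0], "").trim();
  return bangs.map((bang) => `${bang} ${terms}`);
}

console.log(expandMetabang("python !!tech"));
// ["!so python", "!gh python", "!mdn python"]
```

Each resulting query could then be opened as a normal bang search, so the feature would compose with what DDG already does.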
Bill Blunden: "In this editorial he asserts that American spies are motivated primarily by the desire to thwart terrorist plots."
Glenn Greenwald: "In one sense, this blame-shifting tactic is understandable. After all, the CIA, the NSA and similar agencies receive billions of dollars annually from Congress and have been vested by their Senate overseers with virtually unlimited spying power. They have one paramount mission: find and stop people who are plotting terrorist attacks. When they fail, of course they are desperate to blame others."
Maybe Glenn's sentence could have been written "They have been given the mandate to find and stop people who are plotting..." to fend off a potential (purposeful?) misreading like Mr. Blunden's, but the problem here is clearly not the wording.
Nothing about Glenn's article supports any of Mr. Blunden's rambling innuendo. Not only is this a lame attempt to score points off of Glenn, it's a pretty dubious submission to HN in the first place.
I think the FSF could learn a lot from what Micah Lee is doing at The Intercept. He's been writing a bunch of articles that are a nice blend of why and how, with a conversational feel.
In terms of outreach and informing new generations of users... I think adopting that style would be a big win. Even non-technical users have a multi-year investment in Windows, and in spite of all the polish of modern distros, the jump to FOSS is still a big one. Help people make it.
Because a blunt, rude, & brutal response to the observation that the community around the kernel is blunt, rude, & brutal is obviously the way to go...
That would be cool. I'm not sure it would change much though. I know someone working with search data who recently tried out Neo4j with a test data set of 500,000,000 nodes and apparently was really disappointed with the results.
I'm not sure that graph data (generally) is particularly amenable to being spread across multiple nodes. My understanding is that ArangoDB has implemented some clustering based on Google's Pregel framework, so I suspect it might fare a bit better in my friend's test... but in spite of my urging I don't know that he has had time to recreate the test with Arango. I'm keeping my fingers crossed.
I don't know if any database is fun to deal with at that size. My experience with Arango has been an unremarkable amount of remarkably complex data, so I would also be interested to see the results with something huge.
I'd love to hear from your friend about his experience with Neo4j, to see how we can make it easier to configure it correctly for that data volume.
I think the idea with "native multi-model" is that Arango was explicitly designed to do key/value, documents, and graphs, rather than having them bolted on afterwards.
I've been using ArangoDB for a year now and I think they are definitely on to something cool.
Having stumbled upon some really complex data a few times now, I increasingly appreciate how amazing it is to be able to model your data any way you need, without the complexity of running multiple data stores.
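To illustrate what multi-model buys you (the collection `users` and graph `follows` here are made up for the example), a single AQL query can mix a document-style filter with a graph traversal, so one store covers both models:

```javascript
// Illustrative sketch: builds an AQL query string that combines a
// document FILTER with a graph traversal in one request. The collection
// and graph names are hypothetical, not from a real schema.
function activeFollowersOf(userId) {
  return `
    FOR user IN users
      FILTER user.active == true
      FOR friend IN 1..1 OUTBOUND user._id GRAPH "follows"
        FILTER friend._id == "${userId}"
        RETURN user.name
  `;
}

console.log(activeFollowersOf("users/alice"));
```

With separate stores you'd typically do the document filter in one database, the traversal in another, and join the results in application code; here it's one round trip.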
Cool to see that I apparently didn't give up any performance to get the flexibility. :)
I'd love to see them push the geospatial capabilities a little further, but they are already pretty decent.
Seems like overkill. :P