"Initial support for English, Spanish and Japanese.
It includes:
* Sentiment Analysis: Understand the overall sentiment of a block of text
* Entity Recognition: Identify the most relevant entities for a block of text and label them with types such as person, organization, location, events, products and media
* Syntax Analysis: Identify parts of speech and create dependency parse trees for each sentence to reveal the structure and meaning of text"
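The features above are exposed over a plain REST endpoint, so a request is just a JSON document. As a minimal sketch, here is what building an entity-analysis request and pulling entities (with their Wikipedia links) out of a response might look like; the endpoint URL and response shape follow the documented API, but the sample response values are made up for illustration:

```python
# Assumed v1 endpoint for the analyzeEntities method (check current docs)
API_URL = "https://language.googleapis.com/v1/documents:analyzeEntities"

def build_request(text, language="en"):
    # Request body shape for analyzeEntities: a document plus an encoding type
    return {
        "document": {"type": "PLAIN_TEXT", "language": language, "content": text},
        "encodingType": "UTF8",
    }

def extract_entities(response):
    # Pull (name, type, wikipedia_url) triples out of a response dict
    out = []
    for ent in response.get("entities", []):
        out.append((ent.get("name"),
                    ent.get("type"),
                    ent.get("metadata", {}).get("wikipedia_url")))
    return out

# Illustrative response (shape only; values are invented)
sample = {"entities": [{"name": "Google", "type": "ORGANIZATION",
                        "metadata": {"wikipedia_url": "https://en.wikipedia.org/wiki/Google"}}]}
print(extract_entities(sample))
# → [('Google', 'ORGANIZATION', 'https://en.wikipedia.org/wiki/Google')]
```

You would POST `build_request(...)` to the endpoint with your credentials; everything else is ordinary JSON handling.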
The number of possible applications is enormous. 💪
This is exactly what I've been looking for! Right now, LUIS and Wit.ai mostly require you to route your entire chat stream through them, which makes it difficult to do hybrid approaches where you clean up the chat structure via pattern matching first and then try to decipher it with NLP.
An alternative is to use Stanford's CoreNLP for this, but its sheer size makes it impractical for some cloud deployments like Heroku, so a REST API is nicer. I'm curious to see how their NER compares to Stanford's; the automatic retrieval of Wikipedia URLs is a nice touch.
Yup, seems to work(ish). I tried it a couple of times and 'Product Hunt' came up as an organisation before, but not this time though...
You can read the launch blog post here
Would love to hear from @apoorvsaxena1 on this!
@bentossell Happy belated birthday! This is a tough one. The model determines entity type based on how and where an entity occurs in relation to others in the sentence. For example, changing the preposition "in" ("in the Starbucks") to "at" changes the entity type to person, because you could be working for a person (a manager) at the Starbucks.
@glowingrec Sentences with low magnitude should be interpreted as mixed sentiment. You need to look at the combination of polarity and magnitude to determine sentiment.
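One way to sketch that combination in code: keep the score (polarity) and magnitude separate, and only fall back to magnitude when the score is near zero. The thresholds below are my own illustrative choices, not values from the API documentation:

```python
def interpret(score, magnitude, neutral_band=0.25, mixed_floor=1.0):
    """Combine polarity (score) and magnitude into a coarse label.

    Thresholds are illustrative: a near-zero score with a lot of
    emotional content (high magnitude) reads as mixed sentiment,
    while a near-zero score with little content reads as neutral.
    """
    if abs(score) < neutral_band:
        return "mixed" if magnitude >= mixed_floor else "neutral"
    return "positive" if score > 0 else "negative"

print(interpret(0.1, 3.0))   # → mixed
print(interpret(0.1, 0.2))   # → neutral
print(interpret(0.8, 2.0))   # → positive
```

The point is just that neither number alone is enough: score tells you direction, magnitude tells you how much sentiment is present at all.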
Very cool! I used this https://dandelion.eu/semantic-te... but this new Google service looks quite interesting. Glad to see they connected it to Wikipedia resources (I was afraid they might have connected it to their own resource base, e.g. Freebase).
@apoorvsaxena1 Out of curiosity, what is the process for supporting a new language? Are there language-specific rules, like grammar, to implement, or is it just a matter of filling in a kind of dictionary?