python-twitter
A Python wrapper around the Twitter API.
Is there a way to get the date of friendship creation for both my friends and followers on Twitter, especially with python-twitter?
Source: (StackOverflow)
I'm using the python-twitter API. Whatever I pass to the code below as user, it still returns my timeline and not the required user's. Is there something I'm doing wrong?
import twitter
api = twitter.Api(consumer_key=CONSUMER_KEY,
                  consumer_secret=CONSUMER_SECRET,
                  access_token_key=ACCESS_KEY,
                  access_token_secret=ACCESS_SECRET)
user = "@stackfeed"
statuses = api.GetUserTimeline(user)
print [s.text for s in statuses]
When I do not pass the required fields into twitter.Api, it gives me an authentication error.
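In python-twitter, GetUserTimeline accepts user_id and screen_name keyword arguments; a bare positional string like "@stackfeed" may be taken for a user id and ignored, falling back to the authenticated user's timeline. A minimal sketch of the keyword form (the helper name and the commented-out credential placeholders are mine, not part of the library):

```python
def clean_screen_name(handle):
    """Strip a leading '@'; the API expects the bare screen name."""
    return handle.lstrip('@')

# import twitter
# api = twitter.Api(consumer_key=CONSUMER_KEY,
#                   consumer_secret=CONSUMER_SECRET,
#                   access_token_key=ACCESS_KEY,
#                   access_token_secret=ACCESS_SECRET)
# statuses = api.GetUserTimeline(screen_name=clean_screen_name("@stackfeed"))
# print([s.text for s in statuses])
```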
Source: (StackOverflow)
I am trying to follow some users in a list with the python-twitter library, but I get a "You've already requested to follow username" error for some of them. That means I have already sent a follow request to that user, so I can't do it again. How can I keep track of the users I have sent follow requests to? Or is there another way to handle this?
for userID in UserIDs:
    api.CreateFriendship(userID)
EDIT: To summarize: you can follow some users right away, but others don't allow it. For those, you must first send a friend request, which they may or may not accept. What I want to know is how I can list the users I have sent requests to.
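One approach is to keep a local record of the IDs you have already asked to follow and catch the error the library raises; the REST API also exposes a friendships/outgoing endpoint listing pending requests, if your wrapper version supports it. A sketch under those assumptions (UserIDs, api, and the set name are placeholders):

```python
def ids_to_request(user_ids, already_requested):
    """Filter out users we have already sent a follow request to."""
    return [uid for uid in user_ids if uid not in already_requested]

# requested = set()
# for user_id in ids_to_request(UserIDs, requested):
#     try:
#         api.CreateFriendship(user_id)
#         requested.add(user_id)
#     except twitter.TwitterError:
#         pass  # e.g. "You've already requested to follow ..."
```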
Source: (StackOverflow)
I've recently read the book "21 Recipes for Mining Twitter", and there they use the python-twitter wrapper by sixohsix.
It seems to me that this library is a bit outdated, since it still has a distinction between the Search API and the REST API. It seems to be made for the API Version 1, but Version 1.1 requires authentication for searching Twitter.
In sixohsix's approach, you establish a Twitter search in the following way:
twitter_search = twitter.Twitter(domain="search.twitter.com")
...
twitter_search.search(q="myquery")
At the same time, a connection to the "regular" REST API needs something like this:
twitter.Twitter(domain='search.twitter.com', api_version='1.1',
                auth=twitter.oauth.OAuth(access_token, access_token_secret,
                                         consumer_key, consumer_secret))
But I thought in 1.1, search also needs OAuth! Either the Twitter documentation is quite confusing or sixohsix's library is really kinda outdated.
Final question: what Python library should I use to easily and, most importantly, consistently perform searches and other REST calls against API 1.1? I saw bear's library, which seems to be more consistent.
But maybe I am totally on the wrong path. I would like to hear advice from experienced Python people who interact a lot with Twitter's 1.1 API. Thanks.
EDIT
See issue #109 on sixohsix's GitHub: the issue has been fixed, and Search API v1.1 is now incorporated into the wrapper.
Source: (StackOverflow)
For a research project, I am collecting tweets using python-twitter. However, when running our program nonstop on a single computer for a week, we manage to collect only about 20 MB of data. I am only running this program on one machine so that we do not collect the same tweets twice.
Our program runs a loop that calls getPublicTimeline() every 60 seconds. I tried to improve this by calling getUserTimeline() on some of the users that appeared in the public timeline. However, this consistently got me banned from collecting tweets at all for about half an hour each time. Even without the ban, adding this code produced very little speed-up.
I know about Twitter's "whitelisting", which allows a user to submit more requests per hour. I applied for this about three weeks ago and have not heard back since, so I am looking for alternatives that will allow our program to collect tweets more efficiently without going over the standard rate limit. Does anyone know of a faster way to collect public tweets from Twitter? We'd like to get about 100 MB per week.
Thanks.
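Two things usually help in this situation: pass since_id so each poll only returns tweets newer than the last one seen, and consider the Streaming API, which pushes tweets continuously instead of being polled (newer python-twitter versions expose it, though method names vary by version). A sketch of the since_id bookkeeping; the commented polling loop is illustrative only:

```python
def newest_id(statuses, since_id=0):
    """Highest status id seen so far; pass it back as since_id on the next poll."""
    return max([s["id"] for s in statuses] + [since_id])

# since_id = 0
# while True:
#     statuses = api.GetPublicTimeline(since_id=since_id)  # only tweets newer than since_id
#     since_id = newest_id([s.AsDict() for s in statuses], since_id)
#     time.sleep(60)
```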
Source: (StackOverflow)
Is there a way to change your twitter password via the python-twitter API or the twitter API in general? I have looked around but can't seem to find this information...
Source: (StackOverflow)
I am using Python Twitter search api from https://github.com/sixohsix/twitter
When I try to search for a query using the "until" parameter, the search does not return anything
from twitter import *
t = Twitter(auth=OAuth(....))
t.search.tweets(q = 'hello', count=3, until='2012-01-01')
{u'search_metadata': {u'count': 3, u'completed_in': 0.007, u'max_id_str': u'9223372036854775807', u'since_id_str': u'0', u'refresh_url': u'?since_id=9223372036854775807&q=hello%20until%3A2012-01-01&include_entities=1', u'since_id': 0, u'query': u'hello+until%3A2012-01-01', u'max_id': 9223372036854775807L}, u'statuses': []}
When I search without "until", however, it finds tweets normally. And when I try the search manually on twitter.com/search
https://twitter.com/search?q=hello%20until%3A2012-01-01&src=typd
it also finds tweets normally.
Any ideas?
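The likely cause: the standard Search API only indexes roughly the last week of tweets, so an until date far in the past returns an empty result, even though the web search (which queries the full archive) finds matches. If recent tweets are enough, keep until inside that window; a small helper (the function name is mine):

```python
from datetime import date, timedelta

def recent_until(days_back=1):
    """A YYYY-MM-DD 'until' value inside the ~7-day search index.
    Note that 'until' is exclusive: only tweets from before that date are returned."""
    return (date.today() - timedelta(days=days_back)).isoformat()

# t.search.tweets(q='hello', count=3, until=recent_until())
```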
Source: (StackOverflow)
I'm trying the presumably basic task of importing the python-twitter library.
First I got an error on line 52, saying it couldn't import a JSON library. That line is part of a block where the library decides which JSON library to import based on the Python version.
I commented out all the lines addressing Python versions other than the one I'm using, and that worked.
Then another error popped up:
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import twitter
File "C:\Python32\lib\twitter.py", line 3448
except urllib2.HTTPError, http_error:
^ SyntaxError: invalid syntax
This one I don't understand.
I tried importing this in versions 2.6, 2.7, 3.1 and 3.2.2, but none worked.
In one I was asked for the oauth library, then oauth2. The first one I got installed; the second one I couldn't.
I think this should work in 3.2.2. Can anyone help me?
Thanks in advance
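The SyntaxError is the giveaway: except urllib2.HTTPError, http_error: is Python 2-only syntax (and the urllib2 module itself no longer exists in Python 3), so that version of the library cannot run on 3.1 or 3.2 at all. Either run it under Python 2.6/2.7, or use a fork ported to Python 3. For reference, the modern except form, sketched with stand-in functions of my own:

```python
from urllib.error import HTTPError  # urllib2.HTTPError in Python 2

def safe_call(func):
    """Run func(), turning an HTTPError into its status code."""
    try:
        return func()
    except HTTPError as http_error:  # the 'as' syntax works on Python 2.6+ and 3.x
        return http_error.code

def boom():
    """Stand-in that simulates a failed request."""
    raise HTTPError("http://example.com", 404, "Not Found", {}, None)
```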
Source: (StackOverflow)
I'm very new to the Twitter API. If I use the Search API and call it every minute to retrieve about 1000 tweets, will I get duplicate tweets in case fewer than 1000 tweets were created for a given criterion, or if I call it more often than once a minute?
I hope my question is clear. In case it matters, I use the python-twitter library, and the way I get tweets is:
self.api = twitter.Api(consumer_key, consumer_secret ,access_key, access_secret)
self.api.VerifyCredentials()
self.api.GetSearch(self.hashtag, per_page=100)
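Yes: polling more often than new tweets appear will return the same tweets again. The usual fix is to pass the since_id parameter (GetSearch accepts it in python-twitter) and/or to deduplicate by status id locally. A sketch of the local dedup; the commented lines show where the API call would go:

```python
def merge_new(seen_ids, statuses):
    """Return only unseen statuses (by id) and remember their ids."""
    fresh = [s for s in statuses if s["id"] not in seen_ids]
    seen_ids.update(s["id"] for s in fresh)
    return fresh

# seen = set()
# results = self.api.GetSearch(self.hashtag, per_page=100)
# new_tweets = merge_new(seen, [r.AsDict() for r in results])
```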
Source: (StackOverflow)
I am using the python-twitter API and I got consumer_key, consumer_secret, access_token_key and access_token_secret, but when I try the code below, print api.VerifyCredentials() outputs {}
and print status.text outputs None.
import twitter
api = twitter.Api(consumer_key='**',
                  consumer_secret='**',
                  access_token_key='**',
                  access_token_secret='**')
print api.VerifyCredentials() #returns {}
status = api.PostUpdate('Ilovepython-twitter!')
print status.text #returns none
What do you think is the problem here?
Source: (StackOverflow)
I'm trying to programmatically retweet various tweets with Python's python-twitter
library. The code executes without error, but the RT never happens. Here's the code:
from twitter import Twitter, OAuth
# my actual keys are here
OAUTH_TOKEN = ""
OAUTH_SECRET = ""
CONSUMER_KEY = ""
CONSUMER_SECRET = ""
t = Twitter(auth=OAuth(OAUTH_TOKEN, OAUTH_SECRET,
                       CONSUMER_KEY, CONSUMER_SECRET))
result = t.statuses.retweets._id(_id=444320020122722304)
print(result)
The only output is an empty list. How can I get it to actually RT the tweet?
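statuses/retweets/:id is a GET endpoint that merely lists existing retweets of a tweet, which is why an empty list comes back; creating a retweet is the singular statuses/retweet/:id endpoint, issued as a POST. In sixohsix's wrapper that should be t.statuses.retweet._id(_id=...), with retweet rather than retweets (the library knows to POST for that action). A tiny helper just to make the endpoint difference explicit (the function is mine, not part of the library):

```python
def retweet_endpoint(tweet_id, create=True):
    """'statuses/retweet/:id' (POST, creates a RT) vs. 'statuses/retweets/:id' (GET, lists RTs)."""
    action = "retweet" if create else "retweets"
    return "statuses/%s/%d.json" % (action, tweet_id)

# result = t.statuses.retweet._id(_id=444320020122722304)  # note: retweet, not retweets
```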
Source: (StackOverflow)
I'm trying to return a dictionary that aggregates tweets by their nearest state center. I'm iterating over all the tweets, and for each tweet I'm checking all the states to see which one is closest.
What would be a better way to do this?
def group_tweets_by_state(tweets):
    """
    The keys of the returned dictionary are state names, and the values are
    lists of tweets that appear closer to that state center than any other.

    tweets -- a sequence of tweet abstract data types
    """
    tweets_by_state = {}
    for tweet in tweets:
        position = tweet_location(tweet)
        min, result_state = 100000, 'CA'
        for state in us_states:
            if geo_distance(position, find_state_center(us_states[state])) < min:
                min = geo_distance(position, find_state_center(us_states[state]))
                result_state = state
        if result_state not in tweets_by_state:
            tweets_by_state[result_state] = []
            tweets_by_state[result_state].append(tweet)
        else:
            tweets_by_state[result_state].append(tweet)
    return tweets_by_state
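One improvement: compute each distance once and let min() with a key pick the nearest state, instead of shadowing the built-in min and calling geo_distance twice per state. A sketch that takes the helpers as parameters so it is self-contained (in the original they are the free functions tweet_location, geo_distance, and find_state_center):

```python
def group_tweets_by_state(tweets, state_centers, tweet_location, geo_distance):
    """Group tweets under the state whose center is nearest to each tweet."""
    tweets_by_state = {}
    for tweet in tweets:
        position = tweet_location(tweet)
        # min() with a key evaluates geo_distance once per state
        nearest = min(state_centers,
                      key=lambda name: geo_distance(position, state_centers[name]))
        tweets_by_state.setdefault(nearest, []).append(tweet)
    return tweets_by_state
```

Here state_centers is assumed to be precomputed once, e.g. {name: find_state_center(us_states[name]) for name in us_states}, which also avoids recomputing centers inside the loop.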
Source: (StackOverflow)
I have an xml like this:
<author ="twitter" lang="english" type="xx" age_misc="xx" url="https://twitter.com/Carmen_RRHH">
<documents count="436">
<document id="106259332342342348513" url="https://twitter.com/Carmen_RRHH/status/106259338234048513"> </document>
<document id="232342342342323423" url="https://twitter.com/Carmen_RRHH/status/106260629999992832"> </document>
<document id="107084815504908291" url="https://twitter.com/Carmen_RRHH/status/107084815504908291"> </document>
<document id="108611036164276224" url="https://twitter.com/Carmen_RRHH/status/108611036164276224"> </document>
<document id="23423423423423" url="https://twitter.com/Carmen_RRHH/status/108611275851956224"> </document>
<document id="109283650823423480806912" url="https://twitter.com/Carmen_RRHH/status/109283650880806912"> </document>
<document id="10951489623423290488320" url="https://twitter.com/Carmen_RRHH/status/109514896290488320"> </document>
<document id="1095159513234234355080704" url="https://twitter.com/Carmen_RRHH/status/109515951355080704"> </document>
<document id="96252622234239511966720" url="https://twitter.com/Carmen_RRHH/status/96252629511966720"> </document>
</documents>
</author>
Is it possible to get the content of these links and place them into a pandas DataFrame? Any idea of how to approach this task? Thanks in advance.
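The document elements can be read with the standard library's ElementTree and turned into rows; note that the author tag as shown is not well-formed (its first attribute has no name), so the sketch below assumes a repaired sample. Fetching each tweet's text would then need the API (for example python-twitter's GetStatus with the document id), since the bare URLs would otherwise have to be scraped:

```python
import xml.etree.ElementTree as ET

# Repaired sample: the original '<author ="twitter" ...>' lacks an attribute name.
SAMPLE = """<author name="twitter" lang="english">
  <documents count="2">
    <document id="1" url="https://twitter.com/Carmen_RRHH/status/1"> </document>
    <document id="2" url="https://twitter.com/Carmen_RRHH/status/2"> </document>
  </documents>
</author>"""

def document_rows(xml_text):
    """One dict per <document>, ready for pandas.DataFrame(rows)."""
    root = ET.fromstring(xml_text)
    return [{"id": d.get("id"), "url": d.get("url")} for d in root.iter("document")]

rows = document_rows(SAMPLE)
# import pandas as pd
# df = pd.DataFrame(rows)                                   # columns: id, url
# df["text"] = [api.GetStatus(r["id"]).text for r in rows]  # hypothetical API fetch
```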
Source: (StackOverflow)
Consider the following code:
import twitter
api = twitter.Api()
most_recent_status = api.GetUserTimeline('nemesisdesign')[0].text
On my server (nemesisdesign.net) this stopped working a few days ago.
If I try the same code from my own machine, it works fine.
This is the stack trace:
>>> most_recent_status = api.GetUserTimeline('nemesisdesign')[0].text
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "build/bdist.linux-i686/egg/twitter.py", line 1414, in GetUserTimeline
json = self._FetchUrl(url, parameters=parameters)
File "build/bdist.linux-i686/egg/twitter.py", line 2032, in _FetchUrl
url_data = opener.open(url, encoded_post_data).read()
File "/usr/local/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/local/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/local/lib/python2.6/urllib2.py", line 435, in error
return self._call_chain(*args)
File "/usr/local/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/local/lib/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 404: Not Found
Any hint? I have no clue ... :-(
Source: (StackOverflow)
I'm using python-twitter to search for tweets using Twitter's API, and I have a problem with Chinese terms. Here is a minimal code sample to reproduce the problem:
# -*- coding: utf-8 -*-
import twitter
api = twitter.Api(consumer_key="...", consumer_secret="...",
                  access_token_key="...", access_token_secret="...")
api.VerifyCredentials()
print u"您说英语吗"
r = api.GetSearch(term=u"您说英语吗")
I get this error:
您说英语吗
Traceback (most recent call last):
File "so.py", line 9, in <module>
r = api.GetSearch(term=u"您说英语吗")
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/python_twitter-0.8.7-py2.7.egg/twitter.py", line 2419, in GetSearch
json = self._FetchUrl(url, parameters=parameters)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/python_twitter-0.8.7-py2.7.egg/twitter.py", line 4041, in _FetchUrl
url = req.to_url()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/oauth2-1.5.211-py2.7.egg/oauth2/__init__.py", line 440, in to_url
urllib.urlencode(query, True), fragment)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py", line 1337, in urlencode
l.append(k + '=' + quote_plus(str(elt)))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-4: ordinal not in range(128)
Source: (StackOverflow)
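The traceback ends in urllib's urlencode calling str() on the unicode term, which fails for non-ASCII text on Python 2. The usual workaround with this library version is to pass the term as UTF-8 encoded bytes, so percent-encoding has bytes to work on. Sketched below with Python 3 naming (on Python 2 the same function is urllib.quote_plus):

```python
from urllib.parse import quote_plus  # Python 2: from urllib import quote_plus

term = u"您说英语吗"
utf8_term = term.encode("utf-8")   # bytes, safe to percent-encode
encoded = quote_plus(utf8_term)

# r = api.GetSearch(term=term.encode("utf-8"))  # pass UTF-8 bytes, not a unicode object
```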