jsonpickle

Python library for serializing any arbitrary object graph into JSON. It can take almost any Python object and turn it into JSON, and it can reconstitute the object back into Python. (From the jsonpickle 0.8.0 documentation.)

How to exclude specific fields on serialization with jsonpickle?

I'm using the SQLAlchemy extension with Flask. While serializing my models (which are also used for database operations) with jsonpickle, I want some specific attributes to be ignored. Is there a way to set such rules?

SQLAlchemy adds an attribute named _sa_instance_state to the object. In short, I do not want this field to appear in the JSON output.
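
One direction that may work (a minimal sketch, not from the original post; User and db.Model are placeholder names for a Flask-SQLAlchemy model): give the model a __getstate__ that drops SQLAlchemy's bookkeeping attribute, since jsonpickle uses __getstate__ instead of the raw __dict__ when it is defined.

import jsonpickle

class User(db.Model):  # hypothetical Flask-SQLAlchemy model (db = SQLAlchemy(app))
    # ... columns ...

    def __getstate__(self):
        # copy the instance dict and drop SQLAlchemy's internal state
        state = self.__dict__.copy()
        state.pop('_sa_instance_state', None)
        return state

print(jsonpickle.encode(User.query.first()))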


Source: (StackOverflow)

google app engine jsonpickle

Has anyone got jsonpickle working on Google App Engine? My logs say there is no such module, but the module is there as sure as you're born. I'm using jsonpickle 0.32.

<type 'exceptions.ImportError'>: No module named jsonpickle
Traceback (most recent call last):
  File "/base/data/home/apps/xxxxx/xxxxxxxxxxxxxxxxx/main.py", line 4, in <module>
    import jsonpickle
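
A common cause of this on App Engine (a hedged note, not part of the original question): third-party packages have to be vendored inside the application directory, because the runtime does not see packages installed on the development machine. A minimal sketch, assuming jsonpickle has been copied into a lib/ folder next to main.py:

import os
import sys

# make the vendored lib/ directory importable before any third-party imports
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))

import jsonpickle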

Source: (StackOverflow)

python jsonpickle encode without object name

I was wondering how I can get jsonpickle to omit the "py/object" object/class description in its output?

In other words, here is what I get now:

{"py/object": "mymodule.myclasses.myClass", "propertyA": "", "propertyB": null, "propertyC": 100, "listProperty": []}

I would like it to be:

{"propertyA": "", "propertyB": null, "propertyC": 100, "listProperty": []}

I am using:

jsonpickle.encode(myListOfObjects)
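
For reference, jsonpickle's encode takes an unpicklable flag; setting it to False drops the py/object marker, at the cost of the output no longer being decodable back into the original class:

jsonpickle.encode(myListOfObjects, unpicklable=False)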

Source: (StackOverflow)

jsonpickle force using __dict__ due to type evolution

This question is related to type evolution with jsonpickle (python)

Current state description:

I need to store an object to a JSON file using jsonpickle in python.

The object class CarState is generated by a script from another software component, so I can't change the class itself. This script automatically generates the __getstate__ and __setstate__ methods that jsonpickle uses for serializing the object. __getstate__ returns just a list of the values of each member variable, without the field names. Therefore jsonpickle stores only the values, not the field names, within the JSON data (see the code example below).

The Problem:

Let's say my program needs to extend the class CarState for a new version (Version 2) by an additional field (CarStateNewVersion). Now if it loads the JSON data from version 1, the data isn't assigned to the correct fields.

Here's example code demonstrating the problem. The class CarState is generated by the script and simplified here to show the problem. In Version 2 I update the class CarState with a new field (in the code snippet it is inserted as CarStateNewVersion to keep things simple).

#!/usr/bin/env python
import jsonpickle as jp

# Class using slots and implementing the __getstate__ method
# Let's say this is in program version 1
class CarState(object):
    __slots__ = ['company','type']
    _slot_types = ['string','string']

    def __init__(self):
        self.company = ""
        self.type = ""

    def __getstate__(self):
        return [getattr(self, x) for x in self.__slots__]

    def __setstate__(self, state):
        for x, val in zip(self.__slots__, state):
            setattr(self, x, val)

# Class using slots and implementing the __getstate__ method
# For program version 2 a new field 'year' is needed           
class CarStateNewVersion(object):
    __slots__ = ['company','year','type']
    _slot_types = ['string','string','string']

    def __init__(self):
        self.company = ""
        self.type = ""
        self.year = "1900"

    def __getstate__(self):
        return [getattr(self, x) for x in self.__slots__]

    def __setstate__(self, state):
        for x, val in zip(self.__slots__, state):
            setattr(self, x, val)

# Class using slots without the __getstate__ method
# Let's say this is in program version 1            
class CarDict(object):
    __slots__ = ['company','type']
    _slot_types = ['string','string']

    def __init__(self):
        self.company = ""
        self.type = ""

# Class using slots without the __getstate__ method
# For program version 2 a new field 'year' is needed      
class CarDictNewVersion(object):
    __slots__ = ['company','year','type']
    _slot_types = ['string','string','string']

    def __init__(self):
        self.company = ""
        self.type = ""
        self.year = "1900"



if __name__ == "__main__":

    # Version 1 stores the data
    carDict = CarDict()
    carDict.company = "Ford"
    carDict.type = "Mustang"
    print jp.encode(carDict)
    # {"py/object": "__main__.CarDict", "company": "Ford", "type": "Mustang"}

    # Now version 2 tries to load the data
    carDictNewVersion = jp.decode('{"py/object": "__main__.CarDictNewVersion", "company": "Ford", "type": "Mustang"}')
    # OK!
    # carDictNewVersion.company = Ford
    # carDictNewVersion.year = undefined
    # carDictNewVersion.type = Mustang


    # Version 1 stores the data
    carState = CarState()
    carState.company = "Ford"
    carState.type = "Mustang"
    print jp.encode(carState)
    # {"py/object": "__main__.CarState", "py/state": ["Ford", "Mustang"]}

    # Now version 2 tries to load the data    
    carStateNewVersion = jp.decode('{"py/object": "__main__.CarStateNewVersion", "py/state": ["Ford", "Mustang"]}')
    # !!!! ERROR !!!!
    # carStateNewVersion.company = Ford
    # carStateNewVersion.year = Mustang
    # carStateNewVersion.type = undefined
    try:
        carStateNewVersion.year
    except AttributeError:
        carStateNewVersion.year = "1900"

As you can see with the CarDict and CarDictNewVersion classes, if __getstate__ isn't implemented there's no problem with the newly added field, because the JSON text also contains the field names.

Question:

Is there a way to tell jsonpickle not to use __getstate__ and to use __dict__ instead, so that the field names are included in the JSON data? Or is there another way to somehow include the field names?

NOTE: I can't change the CarState class nor its __getstate__ method, since it is generated by a script from another software component. I can only change the code within the main method.

Or is there another serialization tool for Python that creates human-readable output and includes the field names?


Additional background info: the class is generated from message definitions in ROS, namely by genpy, and the generated class inherits from the Message class, which implements __getstate__ (see https://github.com/ros/genpy/blob/indigo-devel/src/genpy/message.py#L308).
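
One possible direction (a sketch only, based on jsonpickle's custom handler mechanism; CarStateHandler is a made-up name): register a handler that writes the slot names alongside the values, so newer versions can match fields by name instead of by position. Existing positional py/state data would still need a one-off migration.

import jsonpickle
import jsonpickle.handlers

class CarStateHandler(jsonpickle.handlers.BaseHandler):
    def flatten(self, obj, data):
        # store each slot by name instead of relying on __getstate__'s positional list
        for name in obj.__slots__:
            data[name] = getattr(obj, name)
        return data

    def restore(self, obj):
        restored = CarStateNewVersion.__new__(CarStateNewVersion)
        for name in CarStateNewVersion.__slots__:
            # fall back to a default when the stored JSON predates the field
            setattr(restored, name, obj.get(name, ""))
        return restored

jsonpickle.handlers.register(CarStateNewVersion, CarStateHandler)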


Source: (StackOverflow)

jsonpickle saving data but loading as None

I have been working on a save file for my RPG. When I create the original file it saves fine, but when I rerun the game in my shell it loads the file and simply returns None instead of the object and its data. I don't really know whether I should use a different save approach or whether I am just missing an easy line of code.

import weapon, shield, furyblade, monster, human, jsonpickle, os, sys
################GAMESAVE#######################
MOUNTEVIL_FILENAME = "MountOfEvilGamesave.json"
gameState = dict()

def loadGame():
    with open(MOUNTEVIL_FILENAME, 'r') as savegame:
        state = jsonpickle.decode(savegame.read())
    print state
    return state

def saveGame():
    global gameState
    with open(MOUNTEVIL_FILENAME, 'w') as savegame:
        savegame.write(jsonpickle.encode(gameState))

def charCreate():
    newPlayer = raw_input("What's your name adventurer?  ")
    gameState[newPlayer] = human.Human(newPlayer)
    gameState[newPlayer].name = newPlayer

def initializeGame():
    player = charCreate()
    state = dict()
    state['characters'] = [player]
    return state
###################MAIN#####################      
def main():
    global gameState

    if not os.path.isfile(MOUNTEVIL_FILENAME):
        gameState = initializeGame()
        saveGame()
    else:
        gameState = loadGame()
    print gameState
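
A hedged observation (not part of the original question): charCreate() builds the player only inside the global gameState and returns None, so initializeGame() ends up storing [None] in state['characters'], which may explain the None values seen after loading. A minimal sketch of the likely fix:

def charCreate():
    newPlayer = raw_input("What's your name adventurer?  ")
    player = human.Human(newPlayer)
    player.name = newPlayer
    return player  # return the object so callers can store it

def initializeGame():
    state = dict()
    state['characters'] = [charCreate()]
    return state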

Source: (StackOverflow)

How do I find the current temperature of a city with NCDC API v2?

I'm a beginner with APIs, but this one feels particularly vast and complex.

I want to find the air temperature of Boston. Or really, output any useful weather data.

Using this URL in JSON I can find the location of Boston: http://www.ncdc.noaa.gov/cdo-web/api/v2/locations/CITY:US250002

Response:

{"mindate":"1885-01-01","maxdate":"2015-07-25","name":"Boston, MA US","datacoverage":1,"id":"CITY:US250002"}

Using this I can find the data category of "Air Temperature": http://www.ncdc.noaa.gov/cdo-web/api/v2/datacategories/TEMP

Response:

{"name":"Air Temperature","id":"TEMP"}

This gives me no helpful information, so here is my effort to combine the two: http://www.ncdc.noaa.gov/cdo-web/api/v2/datacategories/TEMP?locationid=CITY:US250002 (Air Temperature in Boston)

Response:

{"name":"Air Temperature","id":"TEMP"}

Normally when I use an API, all the information is there and can be filtered via parameters. Here the data seems to be divided up: it will show you all the locations, all the datasets, and all the categories that exist in the API, but how do I see the actual meat of the data? For example, the current water temperature in Chicago, IL, or the air temperature on 3/14/2014 in Los Angeles, CA?

Here's the jsfiddle I'm using: http://jsfiddle.net/f98dauaz/1/
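
For what it's worth, a hedged sketch in Python (the language used elsewhere in this collection) of how the /data endpoint is usually queried; the dataset and data-type IDs (GHCND, TMAX) are assumptions, a free token from NOAA must be sent in the headers, and CDO serves historical observations rather than current conditions:

import requests

HEADERS = {'token': 'YOUR_CDO_TOKEN'}  # hypothetical token from NOAA's CDO token page

params = {
    'datasetid': 'GHCND',            # daily summaries dataset (assumption)
    'datatypeid': 'TMAX',            # daily maximum temperature (assumption)
    'locationid': 'CITY:US250002',   # Boston, MA, from the /locations response above
    'startdate': '2014-03-14',
    'enddate': '2014-03-14',
    'limit': 25,
}

resp = requests.get('http://www.ncdc.noaa.gov/cdo-web/api/v2/data',
                    headers=HEADERS, params=params)
print(resp.json())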


Source: (StackOverflow)

jsonpickle serialize object/list to a clean json string

How do I get a clean JSON string using jsonpickle? The output has lots of additional fields that are not in my class, for example "py/reduce", "_state", "_django_version" and so on.

I just want clean output like this:

[
  {"name":"namevalue","id":"4","expiredtime":"2015-3-4 12:0000"},
  {"name":"namevalue2","id":"5","expiredtime":"2015-4-4 12:0000"}
]

I have tried adding unpicklable=False, but it doesn't work.

item_list = list(ChannelItem.objects.filter(channel__id=channel_id))
results = [jsonpickle.encode(ob, unpicklable=False) for ob in item_list]

What have I missed? Can't jsonpickle serialize an object or object list to a clean JSON string that contains just the fields defined in the class? Or is there an alternative package to do this?
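
One possible direction (a sketch, not from the original post; the field names are taken from the desired output above): for Django model instances it may be simpler to skip jsonpickle and build plain dicts with django.forms.models.model_to_dict, then dump them with the standard json module. default=str is used to cope with the datetime field:

import json
from django.forms.models import model_to_dict

item_list = ChannelItem.objects.filter(channel__id=channel_id)
results = [model_to_dict(item, fields=['id', 'name', 'expiredtime'])
           for item in item_list]
print(json.dumps(results, default=str))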


Source: (StackOverflow)

Generic Python Object Serialization for Docker Integration

I am working on a project whose objective is to separate the training and testing stages of machine learning projects. I designed the code to wrap the model being used (by model I mean, for instance, a classifier) in a class Model:

class Model:
    def __init__(self, newModel):
        self.model = newModel

Then I pass the function objects the model has to provide using a list:

def addFunctions(self, functions):
    for function in functions:
        self.functions[function.__name__] = function

The model can now be used for classification, for instance, by constructing it with a classifier object and passing its functions in a list to addFunctions so that I can invoke them. Then I package the model and the code in a Docker container (simply put, a lightweight virtual machine).

The purpose of the separation is to pass just the trained model to the Docker container after optimizing it, without having to pass the whole code. Hence the need to save/serialize the Python Model object.

I tried using pickle as well as jsonpickle, but both of them had limitations when serializing certain types of objects. I could not find any alternative that is generic enough for object storage and retrieval. Are there any alternatives?
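
One alternative worth trying (a sketch, not a definitive recommendation; trained_model stands in for an instance of the Model class above): dill extends pickle to cover many objects that plain pickle rejects, such as functions and lambdas, which may be enough for the function objects stored in Model.functions. cloudpickle is a similar option.

import dill

# serialize the wrapped model, including any function objects it holds
with open('model.pkl', 'wb') as f:
    dill.dump(trained_model, f)

# inside the Docker container, restore it
with open('model.pkl', 'rb') as f:
    trained_model = dill.load(f)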


Source: (StackOverflow)

How to convert a python object from a C++ library to JSON?

I created a shared library using Boost.Python and Py++. I can instantiate objects from the types defined in the library, and I want to encode/decode these objects via JSON, using the jsonpickle module. But it doesn't encode the attributes. I did some research; most probably the problem occurs because the encoded object's __dict__ is empty.

Sample class in the shared library:

struct Coordinate
{
    int x;
    int y;
    Coordinate(int aX, int aY);
}; 

This is the Python wrapper:

BOOST_PYTHON_MODULE(pyplusplus_test){
    bp::class_< Coordinate >( "Coordinate", bp::init< int, int >(( bp::arg("aX"), bp::arg("aY") )) )
        .enable_pickling()
        .def_readwrite( "x", &Coordinate::x )
        .def_readwrite( "y", &Coordinate::y );
  //...
}

A code snippet from the Python side:

cord = pyplusplus_test.Coordinate(10,10)
cord.x = 23
cord.y = -11
tmpStr = jsonpickle.encode(cord)
print tmpStr

And, the output:

{"py/object": "pyplusplus_test.Coordinate"}

Notice that there is no x or y in the JSON output.

Any suggestions?

Thanks.
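
A hedged sketch of one workaround (not from the original post), using jsonpickle's custom handler mechanism to list the attributes explicitly, since the object's state lives in C++ rather than in a Python __dict__:

import jsonpickle
import jsonpickle.handlers
import pyplusplus_test

class CoordinateHandler(jsonpickle.handlers.BaseHandler):
    def flatten(self, obj, data):
        # the attributes live in the C++ object, so copy them out explicitly
        data['x'] = obj.x
        data['y'] = obj.y
        return data

    def restore(self, obj):
        return pyplusplus_test.Coordinate(obj['x'], obj['y'])

jsonpickle.handlers.register(pyplusplus_test.Coordinate, CoordinateHandler)

cord = pyplusplus_test.Coordinate(23, -11)
print(jsonpickle.encode(cord))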


Source: (StackOverflow)

Overwrite self from json

I am writing a program that should run more or less continuously. If I terminate the program, or if it throws an error, I want all object instances to be saved. To do so, I save all instances using jsonpickle. When the program resumes, I want to continue exactly where I left off. My idea was to do something like the following:

class A(object):
    def __init__(self):
        try:
            with open('A.json', 'r') as fp:
                self = jsonpickle.decode(json.load(fp))
        except IOError:
            self.X = 'defaultvalue'
            self.Y = 'defaultvalue'

Where A.json contains an instance of A previously saved using jsonpickle (that part works). However, self is not overwritten by my code.

I suspect that I have to implement what I want in __new__. I read the documentation, but I am a bit lost. I would appreciate any advice on how to implement this in a clean way.
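
A sketch of one way to do this without touching __new__ (assuming the saved JSON decodes back into an A instance): assigning to self inside __init__ only rebinds the local name, so copy the decoded object's attributes into the new instance instead:

import json
import jsonpickle

class A(object):
    def __init__(self):
        try:
            with open('A.json', 'r') as fp:
                saved = jsonpickle.decode(json.load(fp))
            # rebinding 'self' has no effect outside this method;
            # copying the saved attributes into this instance does
            self.__dict__.update(saved.__dict__)
        except IOError:
            self.X = 'defaultvalue'
            self.Y = 'defaultvalue'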


Source: (StackOverflow)

Used jsonpickle to serialize Twitter search API results in Python, can't decode

I wasn't able to serialize Twitter API results until I used jsonpickle (that's Code A below), but then I wasn't able to decode the JSON file (Code B below). Code A created one large JSON object. Please help.

Code A

#!/usr/bin/env python

import tweepy
import simplejson as json
import jsonpickle

consumer_key        = "consumer key here"
consumer_secret     = "consumer secret here"
access_token        = "access token here"
access_token_secret = "access token secret here"

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

api = tweepy.API(auth, parser=tweepy.parsers.ModelParser())

jsonFile = open('/location_here/results.json', 'wb')

for status in tweepy.Cursor(api.search, q="keyword").items():
    print status.created_at, status.text
    frozen = jsonpickle.encode(status)
    s = json.loads(frozen)
    json.dump(s, jsonFile)

Code B

import jsonpickle
import simplejson as json

in_file = open("/location_here/results.json")
out_file = open("/location_here/data.json", 'wb')

jsonstr = in_file.read()

thawed = jsonpickle.decode(jsonstr)

data = json.loads(thawed)

This gives an error, ValueError: Trailing data.

Thanks.
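
A hedged sketch (not from the original post) of one way around the error: Code A writes one JSON document per status back-to-back, so the file as a whole is not a single valid JSON document. Writing one document per line and decoding line by line avoids the trailing-data problem:

import jsonpickle

def write_statuses(statuses, path):
    # one jsonpickle document per line (JSON Lines), so each line decodes on its own
    with open(path, 'w') as json_file:
        for status in statuses:
            json_file.write(jsonpickle.encode(status) + '\n')

def read_statuses(path):
    with open(path) as json_file:
        return [jsonpickle.decode(line) for line in json_file if line.strip()]

# usage with Code A's cursor:
# write_statuses(tweepy.Cursor(api.search, q="keyword").items(), '/location_here/results.json')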


Source: (StackOverflow)

Serialise child collections in Python (with jsonpickle)

I'd like to serialise a Python list that contains nested lists. The code below constructs the object to be serialised from a GNOME keyring, but the jsonpickle encoder doesn't serialise the child lists. With unpicklable=True, I simply get:

[{"py/object": "__main__.Collection", "label": ""}, {"py/object": "__main__.Collection", "label": "Login"}]

I've experimented with setting/not setting max_depth and tried lots of depth numbers, but regardless, the pickler will only pickle the top level items.

How do I make it serialise the entire object structure?

#! /usr/bin/env python

import secretstorage
import jsonpickle

class Secret(object):
    label = ""
    username = ""
    password = ""

    def __init__(self, secret):
        self.label = secret.get_label()
        self.password = '%s' % secret.get_secret()
        attributes = secret.get_attributes()
        if attributes and 'username_value' in attributes:
            self.username = '%s' % attributes['username_value']

class Collection(object):
    label = ""
    secrets = []

    def __init__(self, collection):
        self.label = collection.get_label()
        for secret in collection.get_all_items():
            self.secrets.append(Secret(secret))


def keyring_to_json():
    collections = []
    bus = secretstorage.dbus_init()
    for collection in secretstorage.get_all_collections(bus):
        collections.append(Collection(collection))

    pickle = jsonpickle.encode(collections, unpicklable=False);
    print(pickle)


if __name__ == '__main__':
    keyring_to_json()
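
A hedged note on the code above (not part of the original question): secrets = [] is a class attribute, so appending in __init__ never puts a 'secrets' key into each instance's __dict__, which is what jsonpickle serialises; it also means every Collection shares one list. Making it an instance attribute may already fix the missing children:

class Collection(object):
    def __init__(self, collection):
        self.label = collection.get_label()
        self.secrets = []  # instance attribute, so it ends up in __dict__
        for secret in collection.get_all_items():
            self.secrets.append(Secret(secret))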

Source: (StackOverflow)

Write data directly to a tar archive

I am looking for a way to pickle some Python objects into a combined tar archive. Furthermore, I also need to use np.save(...) to store some NumPy arrays in the same archive. Of course, I also need to read them back later.

So what I tried is

a = np.linspace(1,10,10000)    
tar = tarfile.open(fileName, "w")
tarinfo = tarfile.TarInfo.frombuf(np.save(a, fileName))
tar.close()

and I get the error:

'numpy.ndarray' object has no attribute 'write'

I get similar problems if I pickle an object into the tar file. Any suggestions? If it is easier, jsonpickle would also work.

EDIT: as mentioned in the comments, I confused the arguments of np.save(). However, fixing that does not solve the issue, as now I get the error:

object of type 'NoneType' has no len()

EDIT 2: If there is no solution to the above problem, do you know of any other time-efficient way to bundle files?
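
A hedged sketch (not from the original post) of writing a NumPy array straight into a tar archive: serialise it into an in-memory buffer first, then wrap the buffer in a TarInfo whose size is set explicitly. A pickled or jsonpickled object could be added the same way via an io.BytesIO of its bytes.

import io
import tarfile
import numpy as np

a = np.linspace(1, 10, 10000)

buf = io.BytesIO()
np.save(buf, a)           # np.save(file_like, array): write the array into the buffer
buf.seek(0)

info = tarfile.TarInfo(name='a.npy')
info.size = len(buf.getvalue())

with tarfile.open('bundle.tar', 'w') as tar:
    tar.addfile(info, buf)

# reading it back
with tarfile.open('bundle.tar', 'r') as tar:
    restored = np.load(io.BytesIO(tar.extractfile('a.npy').read()))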


Source: (StackOverflow)

jsonpickle/json function input utf-8, output unicode?

I wrote the following two functions for storing and retrieving any Python (built-in or user-defined) object with a combination of json and jsonpickle (in Python 2.7):

def save(kind, obj):
    pickled = jsonpickle.encode(obj)
    filename = DATA_DESTINATION[kind]  # returns the file destination where the JSON is stored
    if os.path.isfile(filename):
        open(filename, 'w').close()
    with open(filename, 'w') as f:
        json.dump(pickled, f)

def retrieve(kind):
    filename = DATA_DESTINATION[kind]  # returns the file destination where the JSON is stored
    if os.path.isfile(filename):
        with open(filename, 'r') as f:
            pickled = json.load(f)
            unpickled = jsonpickle.decode(pickled)
            print unpickled

I haven't tested these two functions with user-defined objects, but when I save() a built-in dictionary of strings (i.e. {'Adam': 'Age 19', 'Bill': 'Age 32'}) and then retrieve it from the same file, I get the dictionary back in unicode: {u'Adam': u'Age 19', u'Bill': u'Age 32'}. I thought json/jsonpickle encoded to UTF-8 by default; what's the deal here?

[UPDATE]: Removing all jsonpickle encoding/decoding does not affect the output; it is still unicode, so it seems like an issue with json itself? Perhaps I'm doing something wrong.
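
A hedged note (not part of the original question): in Python 2 the json module always returns text as unicode objects when loading; UTF-8 only describes how the text is encoded in the file on disk. If byte strings are really needed after retrieval, they have to be encoded explicitly, e.g. with a small helper like this sketch:

def to_utf8(value):
    # recursively encode unicode values loaded by json back into UTF-8 byte strings
    if isinstance(value, unicode):
        return value.encode('utf-8')
    if isinstance(value, dict):
        return dict((to_utf8(k), to_utf8(v)) for k, v in value.items())
    if isinstance(value, list):
        return [to_utf8(item) for item in value]
    return value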


Source: (StackOverflow)

type evolution with jsonpickle (python)

Is there any support of this in jsonpickle?

E.g. I store an object, then modify its schema, then try to load it back.

The following change, for instance (an attribute addition),

import jsonpickle

class Stam(object):

   def __init__(self, a):
     self.a = a

   def __str__(self):
     return '%s with a=%s' % (self.__class__.__name__, str(self.a))


js = jsonpickle.encode(Stam(123))
print 'encoded:', js

class Stam(object):

   def __init__(self, a, b):
     self.a = a
     self.b = b

   def __str__(self):
     return '%s with a=%s, b=%s' % (self.__class__.__name__, str(self.a), str(self.b))

s=jsonpickle.decode(js)
print 'decoded:', s

produces an error:

encoded: {"py/object": "__main__.Stam", "a": 123}
decoded: Traceback (most recent call last):
  File "C:\gae\google\appengine\ext\admin\__init__.py", line 317, in post
    exec(compiled_code, globals())
  File "<string>", line 25, in <module>
  File "<string>", line 22, in __str__
AttributeError: 'Stam' object has no attribute 'b'
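
A hedged workaround sketch (continuing from the snippet above, not part of the original question): after decoding, backfill any attributes that later versions of the class introduced, so that __str__ and other code can rely on them being present:

s = jsonpickle.decode(js)

# attributes added in later schema versions, with fallback values
defaults = {'b': None}
for name, value in defaults.items():
    if not hasattr(s, name):
        setattr(s, name, value)

print 'decoded:', s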

Source: (StackOverflow)