How to Create a Webhook Listener?
https://www.queworx.com/blog/how-to-create-a-webhook-listener/ (Sat, 15 Jun 2024)

Discover how to create a webhook listener with this beginner-friendly guide. Learn to set up on both web servers and serverless platforms for real-time data processing.

Say you want your application to instantly know when an event occurs—like when data changes or an update is made. That’s where webhooks come into play. Webhooks are automated messages sent from one application to another, triggered by specific events. They allow you to receive and act on real-time information as it happens.

For example, imagine a scenario where a customer completes a purchase on your ecommerce platform. A webhook could be triggered to send purchase details to your script, which then uses this information to fetch additional customer data from a customer data platform and update the customer’s rewards points in a third-party app like Yotpo.

A webhook listener is the code that listens for these incoming webhooks, allowing your applications to take immediate action based on that information. Let’s take a look at the different ways we can set up a webhook listener.

Creating a Webhook Listener

There are two main approaches to setting up a webhook listener: using a web server or opting for serverless computing.

Web Server

A web server runs continuously, listening for incoming webhook events. The simplest way to set one up is to lease a virtual machine from a cloud hosting provider such as AWS or DigitalOcean. You would then choose an operating system, usually a reliable Linux distribution like Ubuntu or CentOS. Once your VM is up, you would install web server software such as Nginx or Apache to handle HTTP requests, configure SSL/TLS certificates to secure the connection, and set up your application to receive and process the webhooks.
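To make this concrete, here is a minimal sketch of a webhook listener built with only Python’s standard library. The port, endpoint path, and the `event` field in the payload are illustrative assumptions, not part of any particular provider’s format:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accepts POSTed JSON webhooks and acknowledges them with a 200 response."""

    def do_POST(self):
        # The body length comes from the Content-Length header
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        payload = json.loads(body) if body else {}

        # Act on the event here; this sketch just echoes the event type back
        event = payload.get("event", "unknown")

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"received": event}).encode())

    def log_message(self, *args):
        pass  # silence per-request logging

# To run the listener on port 8000:
# HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()
```

In practice you would put this behind Nginx or Apache, which terminates SSL/TLS and proxies requests through to the application.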

The primary advantage of using a web server is the complete control it offers over both your application and the server environment. You can customize configurations to suit your specific needs, which is not always possible with serverless, as it often imposes limitations on software, hardware, and execution duration.

Advantages:

  • Full Control and Customization: Complete control over the server environment, configurations, and ability to run customized software and modify hardware settings.

Disadvantages:

  • Complex Setup: Requires configuration of operating systems, web servers, and SSL certificates.
  • Continuous Costs: Incur costs for running servers 24/7, regardless of demand.
  • Maintenance Required: Regular updates and maintenance are necessary to ensure security and performance.

Serverless

On the other hand, serverless computing allows you to execute code without the complexities of server management. The setup process typically involves just writing your webhook handling code and deploying it to a serverless platform.

Deploying serverless code is notably simpler and quicker. In many instances, you simply write the code, deploy it, and all other aspects—such as server management, SSL certificate setup and renewal, and system maintenance—are handled automatically.

Serverless computing can also be more cost-effective, particularly for applications with infrequent requests. You only pay for the compute resources you actually use. Conversely, a traditional web server continuously runs and consumes resources regardless of demand.

Finally, serverless platforms manage scaling automatically, seamlessly adding servers and balancing loads as needed, something that would take a massive amount of work in a traditional server setup.

Advantages:

  • Simplicity: No need to manage servers, operating systems, or SSL configurations.
  • Cost-Effective: Pay only for the compute time you use, reducing costs for low-usage applications.
  • Automatic Scaling: Handles increases in demand without manual intervention.
  • Maintenance-Free: All server maintenance and updates are managed by the provider.

Disadvantages:

  • Cold Starts: Potential delays when functions are invoked after idle periods.
  • Runtime Limitations: Functions have maximum execution time limits, which may not suit all applications.

Ultimately, the decision between using a web server and going for serverless computing depends on your application’s specific requirements and operational needs. For straightforward tasks such as webhook listening, serverless is often the ideal choice due to its simplicity. We will explore how to set up a serverless webhook listener in the next section.

Using CodeUpify to listen for webhooks

codeupify.com is a serverless platform that makes it pretty simple to get a webhook listener up and running. Here are the steps to set up a sample Python webhook listener:

  1. Create an account
  2. Create a function
    1. Select Python as your language
    2. Select Async for concurrency
    3. Add necessary environment variables (such as API keys required for interacting with other services)
    4. Add the libraries your function will need
  3. Add the code with your webhook listener logic
import requests

def handler(request):
    # Parse the incoming request
    # Send a request to a third-party service
    # ...
  4. Once you click save, CodeUpify will deploy your function and provide a URL

The URL you receive is the endpoint for your webhook listener. Simply set this URL as the target in whichever service is sending webhooks.

Conclusion

That’s it! Once your webhook is set up, make sure to thoroughly test it across various scenarios to ensure it handles all expected events accurately. Try to simulate different conditions and payloads to verify that your listener responds correctly. Additionally, ongoing monitoring of your webhook is essential to quickly identify and resolve any errors or performance issues that may arise over time.
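A simple way to exercise the listener is a small script that POSTs sample payloads to your endpoint and checks the status code and body that come back. The URL and payload below are placeholders for your own endpoint and event format:

```python
import json
import urllib.request

def send_test_webhook(url, payload):
    """POST a JSON payload to a webhook endpoint; return (status, decoded body)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read().decode()

# Example: simulate a purchase event against your endpoint
# send_test_webhook("https://example.com/your-webhook-url",
#                   {"event": "purchase.completed", "order_id": 123})
```

Running this with different payloads (missing fields, unexpected types, large bodies) is a quick way to verify the listener handles each condition.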

The post How to Create a Webhook Listener? appeared first on QueWorx.

]]>
Django REST Framework – A Complete Guide
https://www.queworx.com/blog/django-rest-framework-a-complete-guide/ (Wed, 21 Oct 2020)
Table of Contents
  1. What is Django REST framework?
  2. Why should you use Django REST framework?
  3. Why shouldn’t you use Django REST framework
  4. Alternatives to Django REST framework
  5. Getting Started
    1. Installation
    2. Basic Example – Model CRUD API
  6. Django views and class based views
    1. Object update example
  7. Serialization
    1. Referencing other models
    2. Nested serializers
    3. Validation
  8. Authentication and authorization
    1. Authentication
    2. Authorization
  9. Backend filtering
    1. Filtering
    2. Generic filtering
    3. Generic ordering
  10. Pagination, caching, throttling, versioning and documentation
    1. Pagination
    2. Caching
    3. Throttling
    4. Versioning
    5. Documentation
  11. Final thoughts

What is Django REST framework?

Django REST framework is the de facto library for building REST APIs in Django. It’s been around since 2011, and in that time has been used in thousands of projects with over a thousand contributors. It’s currently used by many large companies, including Robinhood, Mozilla, Red Hat, and Eventbrite.

Why should you use Django REST framework?

If you are using Django as your web framework and you need to write a REST API, Django REST framework is the default choice to get the job done. It’s by far the most popular Django library for writing REST APIs, and it’s well maintained and supported. It also comes with many features out of the box to simplify API development:

  • A web browsable API – where you can browse and interact with your API
  • Built in authentication and authorization, including packages for OAuth1a and OAuth2
  • Built in serialization for Django models and other data, with input validation
  • Easy backend filtering and sorting of data
  • Support for throttling of requests
  • Support for easy pagination of results
  • Support for versioning
  • Support for API schemas and documentation
  • Lots of documentation and support of a large community

Why shouldn’t you use Django REST framework?

If you are using Django and building REST APIs, using Django REST framework is a no-brainer. But over the past few years another API style has gained a lot of traction: GraphQL. If you are going to be writing a GraphQL API, it doesn’t make sense to use Django REST framework; take a look at Graphene Django instead.

Alternatives to Django REST framework

Django REST framework has pretty much come to dominate REST API development in Django, but here are some alternatives:

Django Tastypie

Django Tastypie is another complete Django REST API library, and people who have used it tend to say positive things about it. Unfortunately, the project stopped being maintained and is no longer under active development.

Django Restless

From the creator of Django Tastypie, this is a small, flexible REST API library. Where Django REST framework has evolved to be a big library that can accommodate pretty much everyone, Django Restless just tries to do a few things really well, without adding any bloat. If you like to tinker more and want something really fast and flexible, this might be the library for you.

Getting Started

Installation

To start using the Django REST Framework you need to install the djangorestframework package:

pip install djangorestframework

Add rest_framework to your INSTALLED_APPS setting:

INSTALLED_APPS = [
    ...
    'rest_framework',
]

That should be enough to get you started.

Basic Example – Model CRUD API

Django REST framework makes it very easy to create a basic API that works with Django models. With a few lines of code we can create an API that can list our objects and do basic CRUD. Let’s take a look at an example with some basic models.

models.py

from django.db import models

class Author(models.Model):
   name = models.CharField(max_length=255)

class Book(models.Model):
   author = models.ForeignKey(Author, on_delete=models.CASCADE)
   title = models.CharField(max_length=255)
   num_pages = models.IntegerField()

Serializers convert querysets and model instances into data types that can be rendered into content types such as JSON or XML, and back again.

serializers.py

from rest_framework import serializers

from book.models import Book

class BookSerializer(serializers.ModelSerializer):
   class Meta:
       model = Book
       fields = ['author', 'title', 'num_pages']

views.py

from rest_framework import viewsets

from book.models import Book
from book.serializers import BookSerializer

class BookViewset(viewsets.ModelViewSet):
   queryset = Book.objects.all()
   serializer_class = BookSerializer

And we let the REST framework wire up the url routes based on common conventions.

urls.py

from django.urls import include, path
from rest_framework import routers

from book import views

router = routers.DefaultRouter()
router.register(r'', views.BookViewset)

urlpatterns = [
   path('', include(router.urls)),
]

Going to http://127.0.0.1:8000/book gives us:

[Screenshot: the Browsable API list view for /book]

Here we can see a list of books with a GET request and can create a new book with a POST request. The Browsable API gives us a nice human browsable display and forms that we can play around with.

If we go to http://127.0.0.1:8000/book/1/, we see that a GET request to this url will give us details about the book with ID 1. A PUT request will modify that book’s data. And a DELETE request will delete the book with ID `1`.

[Screenshot: the Browsable API detail view for /book/1/]

Since the browser asks for text/html (via the Accept header), we receive the Browsable API’s human-friendly template. If the client asked for application/json instead, it would just get the JSON. You can also set the format explicitly in your browser like so:

http://127.0.0.1:8000/book/1/?format=json

Response:

{"author":2,"title":"To Kill a Mockingbird","num_pages":281}

As you can see Django REST framework makes it very easy for us to create a basic model CRUD API.

Django views and class based views

As we saw in the basic example, Django REST framework makes model CRUD really simple. How do we go about writing some custom API calls? Let’s say we wanted to search the books from the basic example by author and title.

Here’s a basic Django view method for searching books:

views.py

from rest_framework.decorators import api_view
from rest_framework.response import Response

from book.models import Book
from book.serializers import BookSerializer

@api_view(['GET'])
def book_search(request):
   author = request.query_params.get('author', None)
   title = request.query_params.get('title', None)

   queryset = Book.objects.all()
   if author:
       queryset = queryset.filter(author__name__contains=author)
   if title:
       queryset = queryset.filter(title__contains=title)

   serializer = BookSerializer(queryset, many=True)
   return Response(serializer.data)

urls.py

urlpatterns = [
   path('book-search', views.book_search, name='book_search'),
]

The code overall looks pretty similar to the standard Django view, with just a few modifications. It’s wrapped in the api_view decorator. This decorator passes a REST framework Request object and modifies the context of the returned REST framework Response object. We are using request.query_params instead of request.GET, and would use request.data instead of request.POST. And finally it uses a serializer to return a response, which will return the right content type to the client.

If we wanted to use class based views to facilitate code reuse we could modify the above code like so:

views.py

from rest_framework.views import APIView

class BookSearch(APIView):

   def get(self, request, format=None):
       author = self.request.query_params.get('author', None)
       title = self.request.query_params.get('title', None)

       queryset = Book.objects.all()
       if author:
           queryset = queryset.filter(author__name__contains=author)
       if title:
           queryset = queryset.filter(title__contains=title)

       serializer = BookSerializer(queryset, many=True)
       return Response(serializer.data)

urls.py

urlpatterns = [
   path('book-search-view', views.BookSearch.as_view()),
]

Of course the REST framework has a bunch of reusable view classes and mixins you can use. For example, for the above example you might want to use ListAPIView. If you wanted to customize the Book CRUD code, instead of using the ViewSet from the basic example, you could combine a variation of ListModelMixin, CreateModelMixin, RetrieveModelMixin, UpdateModelMixin, and DestroyModelMixin.

GenericAPIView extends the base REST framework APIView class with commonly needed functionality. With this class you can override some attributes to get the desired behavior:

  • queryset or override get_queryset() to specify the objects that should come back from the view
  • serializer_class or override get_serializer_class() to get the serializer class to use for the object
  • pagination_class to specify how pagination will be used
  • filter_backends – backends to use to filter the request; we go over backend filtering below

Here we use ListAPIView (which extends GenericAPIView and ListModelMixin) to create our book search:

views.py

class BookSearch(ListAPIView):

    serializer_class = BookSerializer

    def get_queryset(self):
        author = self.request.query_params.get('author', None)
        title = self.request.query_params.get('title', None)
        queryset = Book.objects.all()
        if author:
            queryset = queryset.filter(author__name__contains=author)
        if title:
            queryset = queryset.filter(title__contains=title)

        return queryset

Object update example

Let’s say we had to write an API that lets a user update a book’s read status:

models.py

class UserBook(models.Model):
   STATUS_UNREAD = 'u'
   STATUS_READ = 'r'
   STATUS_CHOICES = [
       (STATUS_UNREAD, 'unread'),
       (STATUS_READ, 'read'),
   ]

   book = models.ForeignKey(Book, on_delete=models.CASCADE)
   user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)
   status = models.CharField(max_length=1, choices=STATUS_CHOICES, default=STATUS_UNREAD)

serializers.py

class UserBookSerializer(serializers.ModelSerializer):
   class Meta:
       model = UserBook
       fields = ['book', 'user', 'status']

We want to limit the fields they can change to just the status. Ideally, we would also validate that the user has permission to change this specific book, but we’ll get to that in the authentication/authorization section.

views.py

class BookStatusUpdate(UpdateAPIView):
   queryset = UserBook.objects.all()
   serializer_class = UserBookSerializer
   permission_classes = (permissions.IsAuthenticated,)

   def update(self, request, *args, **kwargs):
       instance = self.get_object()
       data = {'status': request.data.get('status')}
       serializer = self.get_serializer(instance, data, partial=True)
       serializer.is_valid(raise_exception=True)
       self.perform_update(serializer)

       return Response(serializer.data)

Serialization

So far we used very simple, automatic serialization by just listing the fields. REST Framework serializers are similar to Django Forms and give us a lot of control by specifying attributes and overriding various methods.

For our BookSerializer we could have listed out the fields with their type, requirements, max_length, and so on.

serializers.py

class BookSerializer(serializers.ModelSerializer):
   title = serializers.CharField(required=True, max_length=100)
   num_pages = serializers.IntegerField(read_only=True)

   class Meta:
       model = Book
       fields = ['author', 'title', 'num_pages']

We could also override create() and update() methods to be able to execute some custom functionality:

serializers.py

class BookSerializer(serializers.ModelSerializer):
   title = serializers.CharField(required=True, allow_blank=True, max_length=100)
   num_pages = serializers.IntegerField(read_only=True)

   def create(self, validated_data):

       # Custom code

       return Book.objects.create(**validated_data)

   def update(self, instance, validated_data):

       # Custom Code

       instance.title = validated_data.get('title', instance.title)
       instance.num_pages = validated_data.get('num_pages', instance.num_pages)
       instance.save()
       return instance


   class Meta:
       model = Book
       fields = ['author', 'title', 'num_pages']

Working with serializers is very similar to working with Django forms: we validate the serializer and then call save() to save the instance. Here’s a serializer that validates the title of the book.

serializers.py

class BookSerializer(serializers.ModelSerializer):
   title = serializers.CharField(max_length=100)

   def validate_title(self, value):
       if len(value) < 4:
           raise serializers.ValidationError("Title is too short")

       return value

   class Meta:
       model = Book
       fields = ['author', 'title', 'num_pages']

views.py

serializer = BookSerializer(data=data)
if serializer.is_valid():
    serializer.save()

Referencing other models

You can reference other entities in various ways:

  • Using the primary key: PrimaryKeyRelatedField
  • Using hyperlinking (the api endpoint url for the other entity): HyperlinkedRelatedField
  • Using the string representation of the object: StringRelatedField
  • Using an identifying slug field on the related entity: SlugRelatedField
  • Nesting the related entity inside the parent representation: We’ll discuss that more below

For example, here’s how the Author on our Book might look if we were to just use PrimaryKeyRelatedField:

{
    "author": 2,
    "title": "To Kill a Mockingbird",
    "num_pages": 281
}

Nested serializers

Serializers can be nested. This lets us work on multiple objects in one operation, like getting all the information about the Book as well as its Author in a GET request:

serializers.py

class AuthorSerializer(serializers.ModelSerializer):
   name = serializers.CharField(max_length=255)

   class Meta:
       model = Author
       fields = ['name']

class BookSerializer(serializers.ModelSerializer):
   title = serializers.CharField(max_length=255)
   author = AuthorSerializer()

   class Meta:
       model = Book
       fields = ['author', 'title', 'num_pages']

http://127.0.0.1:8000/book/1/ returns

{
    "author": {
        "name": "Harper Lee"
    },
    "title": "To Kill a Mockingbird",
    "num_pages": 281
}

To be able to create and update a nested relationship in one request, you will need to modify create() and update(); they will not work with nested fields out of the box. The reason is that relationships between models are complicated and depend on specific application requirements. It’s not something that can be set up automatically: your logic will have to deal with deletions, None objects, and so on.

Here’s how you might handle create() in our simple example:

serializers.py

class BookSerializer(serializers.ModelSerializer):
   title = serializers.CharField(max_length=255)
   author = AuthorSerializer()

   def create(self, validated_data):
       author_data = validated_data.pop('author')
       author = Author.objects.create(**author_data)
       book = Book.objects.create(author=author, **validated_data)
       return book


   class Meta:
       model = Book
       fields = ['author', 'title', 'num_pages']

Doing a POST to http://127.0.0.1:8000/book with

{
    "author": {
        "name": "John1"
    },
    "title": "Book by John1",
    "num_pages": 10
}

Will now create both an author and a book.

Here’s how we might handle a simple update():

serializers.py

def update(self, instance, validated_data):
   author_data = validated_data.pop('author')
   author = instance.author

   instance.title = validated_data.get('title', instance.title)
   instance.num_pages = validated_data.get('num_pages', instance.num_pages)
   instance.save()

   author.name = author_data.get('name', author.name)
   author.save()

   return instance

A PATCH or PUT call to http://127.0.0.1:8000/book/8/ (that’s the id of this particular book), with

{
    "author": {
        "name": "John1_mod"
    },
    "title": "Book by John1_mod",
    "num_pages": 20
}

Will modify our book with the new author, title, and num_pages.

Validation

Validation in the REST framework is done on the serializer. Just like with Django forms, you can set some basic validation on the fields themselves. Extending our Author example with an email field:

serializers.py

class AuthorSerializer(serializers.ModelSerializer):

   class Meta:
       model = Author
       fields = ['name', 'email']

We can add individual fields with various requirements to enforce various rules:

serializers.py

class AuthorSerializer(serializers.ModelSerializer):
   name = serializers.CharField(max_length=255, required=True)
   email = serializers.EmailField(read_only=True,
                               validators=[UniqueValidator(queryset=Author.objects.all())])


   class Meta:
       model = Author
       fields = ['name', 'email']

Now name is a required field, and email is read-only and unique.

Just like with forms, before saving a serializer, you should call is_valid() on it. If there are validation errors they will show up in serializer.errors as a dictionary.

serializer.errors
# {'email': ['Enter a valid e-mail address.']}

When writing your serializer, you can do field level and object level validation. Field-level validation can be done like this:

class AuthorSerializer(serializers.ModelSerializer):

   def validate_email(self, value):
       if value.find('@mail.com') >= 0:
           raise serializers.ValidationError("The author can't have a mail.com address")
       return value

   class Meta:
       model = Author
       fields = ['name', 'email']

Object level validation can be done like this:

class AuthorSerializer(serializers.ModelSerializer):

   def validate(self, data):
       if data['email'].find(data['name']) >= 0:
           raise serializers.ValidationError("The author's email can't contain his name")

       return data

   class Meta:
       model = Author
       fields = ['name', 'email']

Authentication and authorization

Authentication

The default authentication scheme can be set globally with the DEFAULT_AUTHENTICATION_CLASSES setting:

settings.py

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    ]
}

Or on a per view basis with authentication_classes:

views.py

class BookSearch(ListAPIView):
   queryset = Book.objects.all()
   serializer_class = BookSerializer
   authentication_classes = [SessionAuthentication, BasicAuthentication]
   permission_classes = [IsAuthenticated]


@api_view(['GET'])
@authentication_classes([SessionAuthentication, BasicAuthentication])
@permission_classes([IsAuthenticated])
def book_search(request):
	pass

There are four types of authentication schemes:

  • BasicAuthentication: Where the client sends the username and password in the request, not really suitable for production environments
  • TokenAuthentication: The client authenticates once and receives a token, which is then used to authenticate subsequent requests. This is good for separate clients and servers
  • SessionAuthentication: This is the standard django authentication scheme, where there is a server side session and the client passes the session id to the server
  • RemoteUserAuthentication: This scheme has the web server deal with authentication

For APIs, especially where the client is a separate application from the server, token authentication is the most interesting. To do token authentication with Django REST framework, you have to add rest_framework.authtoken to your INSTALLED_APPS.

settings.py

INSTALLED_APPS = [
    ...
    'rest_framework.authtoken'
]

Run migrations after adding this app.

In your application you will have to create a token for the user after they authenticate with a username and password. You can do it with this call:

views.py

from rest_framework.authtoken.models import Token

token = Token.objects.create(user=...)

And then pass that token back to the client. The client will then include that Token in the HTTP headers like so:

Authorization: Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b
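From a Python client, that header can be attached like this. This is a minimal sketch using only the standard library; the URL and token below are placeholders:

```python
import json
import urllib.request

def api_get(url, token):
    """GET a token-protected endpoint, sending the token in the Authorization header."""
    req = urllib.request.Request(
        url,
        headers={"Authorization": "Token %s" % token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# books = api_get("http://127.0.0.1:8000/book/", "9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b")
```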

REST framework already has a built-in view for obtaining an auth token, obtain_auth_token. If the defaults work for you, you can wire this view in urls, and don’t have to write any of your own logic.

urls.py

from rest_framework.authtoken import views
urlpatterns += [
    path('api-token-auth/', views.obtain_auth_token)
]

Authorization

For authorization you can also set global and view-level policies. Globally, you would set it in settings.py:

REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated', # Allow only authenticated requests
        # 'rest_framework.permissions.AllowAny', # Allow anyone
    ]
}

And for views, you would use permission_classes:

views.py

class BookSearch(ListAPIView):
   queryset = Book.objects.all()
   serializer_class = BookSerializer
   authentication_classes = [SessionAuthentication, BasicAuthentication]
   permission_classes = [IsAuthenticated]


@api_view(['GET'])
@authentication_classes([SessionAuthentication, BasicAuthentication])
@permission_classes([IsAuthenticated])
def book_search(request):
    pass

You can have a view that requires authentication for writes but allows anyone read-only access by using the built-in IsAuthenticatedOrReadOnly class:

permission_classes = [IsAuthenticatedOrReadOnly]

For a full list of permissions, take a look at the API Reference.

You can also create custom permissions by extending permissions.BasePermission:

class CustomPermission(permissions.BasePermission):
   def has_permission(self, request, view):
       ip_addr = request.META['REMOTE_ADDR']
       blocked = Blocklist.objects.filter(ip_addr=ip_addr).exists()
       return not blocked

And then include it in your permission_classes:

views.py

class BookSearch(ListAPIView):
   queryset = Book.objects.all()
   serializer_class = BookSerializer
   authentication_classes = [SessionAuthentication, BasicAuthentication]
   permission_classes = [CustomPermission]

And finally, Django REST framework supports object-level permissions: when a view calls check_object_permissions(), each permission class’s has_object_permission() method is run to determine whether the user may act on that specific object instance.

Backend filtering

Filtering

Most of the time you want to filter the queryset that comes back. If you are using GenericAPIView, the simplest way to do that is to override get_queryset(). One common requirement is to filter out the queryset by the current user, here is how you would do that:

views.py

class UserBookList(ListAPIView):
   serializer_class = UserBookSerializer

   def get_queryset(self):
       user = self.request.user
       return UserBook.objects.filter(user=user)

Our BookSearch above actually used query parameters (query_params) to do its filtering by overriding get_queryset().

Generic filtering

Django REST framework also lets you configure a generic filtering system that will use fields on the models to determine what to filter.

To get that set up, you need to first install django-filter

pip install django-filter

Then add django_filters to INSTALLED_APPS

INSTALLED_APPS = [
    ...
    'django_filters',
    ...
]

Then you can either add backend filters globally in your settings.py file

REST_FRAMEWORK = {
    'DEFAULT_FILTER_BACKENDS': ['django_filters.rest_framework.DjangoFilterBackend']
}

Or add it to individual class views

views.py

from django_filters.rest_framework import DjangoFilterBackend

class BookSearch(generics.ListAPIView):
    ...
    filter_backends = [DjangoFilterBackend]

Let’s modify our BookSearch example above with Django Backend Filtering. What we had above:

views.py

class BookSearch(APIView):

   def get(self, request, format=None):
       author = self.request.query_params.get('author', None)
       title = self.request.query_params.get('title', None)

       queryset = Book.objects.all()
       if author:
           queryset = queryset.filter(author__name__contains=author)
       if title:
           queryset = queryset.filter(title__contains=title)

       serializer = BookSerializer(queryset, many=True)
       return Response(serializer.data)

Let’s modify it to use Backend Filtering:

class BookSearch(ListAPIView):
   queryset = Book.objects.all()
   serializer_class = BookSerializer
   filter_backends = [DjangoFilterBackend]
   filterset_fields = ['author__name', 'title']

This gets us exact matches though, which isn’t quite the same functionality. We can switch to the SearchFilter backend to get the same contains-style matching as above:

class BookSearch(ListAPIView):
   queryset = Book.objects.all()
   serializer_class = BookSerializer
   filter_backends = [SearchFilter]
   search_fields = ['author__name', 'title']

Now we just call it with

http://127.0.0.1:8000/book/book-search-view?search=harper

And get back all the books that have “harper” in the title or author’s name.

Generic ordering

We can also order against specific fields like so:

views.py

class BookSearch(ListAPIView):
   queryset = Book.objects.all()
   serializer_class = BookSerializer
   filter_backends = [OrderingFilter]
   ordering_fields = ['title', 'author__name']

Letting someone order with a query like this:

http://127.0.0.1:8000/book/book-search-view?ordering=-title

Note that if you don’t specify ordering_fields, or you set it to ‘__all__’, you can potentially expose fields that you don’t want someone to order by, like password hashes.

Pagination, caching, throttling, versioning and documentation

Pagination

Pagination can be set globally and per view level. To set it globally add it to the settings file:

settings.py

REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
    'PAGE_SIZE': 25
}

To set it on a view you can use the pagination_class attribute. You can create a custom pagination class by extending PageNumberPagination:

class StandardResultsSetPagination(PageNumberPagination):
    page_size = 100
    page_size_query_param = 'page_size'
    max_page_size = 1000
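The interplay between a client-requested page size and max_page_size can be sketched in plain Python — a requested size is honored only up to the cap (the numbers here mirror the class above):

```python
def paginate(items, page, page_size=100, max_page_size=1000):
    # A client-requested page_size is honored only up to max_page_size.
    size = min(page_size, max_page_size)
    start = (page - 1) * size
    return items[start:start + size]

data = list(range(250))
print(paginate(data, 3)[:3])                   # → [200, 201, 202]
print(len(paginate(data, 1, page_size=5000)))  # → 250
```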

Caching

Caching is done by Django with method_decorator, cache_page and vary_on_cookie:

from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page

# Cache requested url for 2 hours
@method_decorator(cache_page(60*60*2), name='dispatch')
class BookSearch(ListAPIView):
   queryset = Book.objects.all()
   serializer_class = BookSerializer
   authentication_classes = [SessionAuthentication, BasicAuthentication]
   permission_classes = [CustomPermission]

vary_on_cookie can be used to cache responses per user.
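A minimal sketch of what vary_on_cookie achieves — the cookie becomes part of the cache key, so one user’s cached response is never served to another (the dict-based cache is purely illustrative):

```python
cache = {}
render_calls = []

def render_page():
    # Stand-in for an expensive view; we count how often it actually runs.
    render_calls.append(1)
    return "rendered page"

def cached_response(url, cookie):
    key = (url, cookie)  # the cookie varies the cache key
    if key not in cache:
        cache[key] = render_page()
    return cache[key]

cached_response("/books/", "session=alice")
cached_response("/books/", "session=alice")  # cache hit for alice
cached_response("/books/", "session=bob")    # separate entry for bob
print(len(render_calls))  # → 2
```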

Throttling

You can throttle (control the rate of requests to) your API. To do it globally, add these settings:

REST_FRAMEWORK = {
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle'
    ],
    'DEFAULT_THROTTLE_RATES': {
        'anon': '100/day',
        'user': '1000/day'
    }
}

Or set it at the view level with throttle_classes, for example:

views.py

class BookSearch(ListAPIView):
   queryset = Book.objects.all()
   serializer_class = BookSerializer
   authentication_classes = [SessionAuthentication, BasicAuthentication]
   throttle_classes = [UserRateThrottle]

For throttling, clients are identified by default using the X-Forwarded-For HTTP header, falling back to the request’s REMOTE_ADDR value if it is not present.
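A rate like ‘100/day’ boils down to counting recent requests per client identity inside a sliding window. A rough self-contained sketch of that idea (DRF’s real implementation also handles cache backends and scopes):

```python
import time

class SimpleRateThrottle:
    """Allow at most num_requests per duration seconds per client."""

    def __init__(self, num_requests, duration):
        self.num_requests = num_requests   # e.g. 100
        self.duration = duration           # e.g. 86400 for '100/day'
        self.history = {}                  # client ident -> timestamps

    def allow_request(self, ident, now=None):
        now = time.time() if now is None else now
        # Drop timestamps that have fallen out of the window.
        recent = [t for t in self.history.get(ident, []) if t > now - self.duration]
        if len(recent) >= self.num_requests:
            self.history[ident] = recent
            return False
        recent.append(now)
        self.history[ident] = recent
        return True

t = SimpleRateThrottle(2, 60)
print(t.allow_request("1.2.3.4", now=0))    # → True
print(t.allow_request("1.2.3.4", now=1))    # → True
print(t.allow_request("1.2.3.4", now=2))    # → False (over the limit)
print(t.allow_request("1.2.3.4", now=120))  # → True (window has passed)
```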

Versioning

By default versioning is not enabled. You can set up versioning by adding this to your settings file:

REST_FRAMEWORK = {
    'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.NamespaceVersioning'
}

If DEFAULT_VERSIONING_CLASS is None, which is the default, then request.version will be None.

It’s possible to set versioning on a specific view with versioning_class, but usually versioning is set globally.

You can control versioning with the following settings:

  • DEFAULT_VERSION: the version used when none is provided in the request; defaults to None. Per-view attribute: default_version.
  • ALLOWED_VERSIONS: the set of versions that may be requested; a version outside the set raises an error. Per-view attribute: allowed_versions.
  • VERSION_PARAM: the parameter to use for versioning; defaults to version. Per-view attribute: version_param.

You have a few options for versioning classes:

  • AcceptHeaderVersioning: Version is passed in the Accept header
  • URLPathVersioning: Version is passed as part of the url structure
  • NamespaceVersioning: Similar to URLPathVersioning, but uses Django’s URL namespaces to determine the version instead of a url keyword argument
  • HostNameVersioning: Uses the hostname url to determine the version
  • QueryParameterVersioning: Uses a query parameter to determine the version
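With URLPathVersioning, for example, the version comes from a named group in the url pattern. A plain-regex sketch of that extraction (the route shape is illustrative):

```python
import re

# The version is a named group in the path, as in URLPathVersioning.
pattern = re.compile(r'^/(?P<version>v\d+)/books/$')

def extract_version(path, default='v1'):
    match = pattern.match(path)
    # Fall back to a default, as DEFAULT_VERSION does.
    return match.group('version') if match else default

print(extract_version('/v2/books/'))  # → v2
print(extract_version('/books/'))     # → v1 (fallback)
```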

You can also create your own custom versioning scheme

How you deal with different versions in your code is up to you. One possible example is to just use different serializers:

def get_serializer_class(self):
    if self.request.version == 'v1':
        return BookSerializerV1
    return BookSerializerV2

Documentation

To generate documentation for your API you will have to generate an OpenAPI schema. Install the pyyaml and uritemplate packages:

pip install pyyaml uritemplate

You can dynamically generate a schema with get_schema_view(), like so:

urlpatterns = [
    # ...
    # Use the `get_schema_view()` helper to add a `SchemaView` to project URLs.
    #   * `title` and `description` parameters are passed to `SchemaGenerator`.
    #   * Provide view name for use with `reverse()`.
    path('openapi', get_schema_view(
        title="Your Project",
        description="API for all things …",
        version="1.0.0"
    ), name='openapi-schema'),
    # ...
]

Going to http://127.0.0.1:8000/openapi should show you the full OpenAPI schema of your API.

You can customize how your schema is generated; to learn how, check out the official documentation.

You can set descriptions on your views that will then be shown in both the browsable API and in the generated schema. The description uses markdown. For example:

@api_view(['GET'])
def book_search(request):
   """
   The book search
   """
   …

For viewset- and view-based classes, you describe the individual methods and actions in the docstring:

class BookViewset(viewsets.ModelViewSet):
   """
   retrieve:
   Return the given book

   create:
   Create a new book.
   """
   queryset = Book.objects.all()
   serializer_class = BookSerializer

Final thoughts

As we saw above, Django REST Framework is an incredibly deep and all-encompassing REST API framework. I tried to distill some of the main concepts into this guide so that you can start working with the framework. The reality is that there is still a lot I wasn’t able to cover here — the types of customizations you can make, the options you can set, and the various classes for every type of scenario. If you ever get stuck, you can always reference the API Guide at https://www.django-rest-framework.org/.

The post Django REST Framework – A Complete Guide appeared first on QueWorx.

]]>
Software Development Basics For Non-tech Founders https://www.queworx.com/blog/software-development-basics-for-non-tech-founders/ Wed, 17 Jun 2020 20:18:32 +0000 https://www.queworx.com/?p=3291 As a non-technical founder of a startup, especially a tech one, at some point you will have to build some custom software. How do you do that? Who do you hire? Software is complicated and expensive, the wrong decisions early on can be very costly. In this article I’m going to go over some of …


The post Software Development Basics For Non-tech Founders appeared first on QueWorx.

]]>
As a non-technical founder of a startup, especially a tech one, at some point you will have to build some custom software. How do you do that? Who do you hire? Software is complicated and expensive, the wrong decisions early on can be very costly. In this article I’m going to go over some of the higher level concepts and point you in the right direction to get started.

MVP


An MVP (Minimum Viable Product) is essentially the least amount of effort software you can build that lets you start engaging your users and learning something valuable. It’s a way of testing assumptions and getting some validated learning without spending a ton of effort doing the wrong things.

Originally, Eric Ries came up with that term in his Lean Startup methodology. While building products at his startups, Eric noticed that he was spending a massive amount of time building the wrong thing, only to then go back to the drawing board and throw out most of it. My experience has been very similar, both in the products that I’ve built and at the startups that I worked at. In fact, I would say that overbuilding in a vacuum is the most common mistake that I have witnessed at startups. Research seems to agree, CB Insights did a postmortem on 100 startups, and the number one reason that startups fail is that they fail to fill a market need. These startups spent months or years building products that no one actually needed.

To understand why that problem exists we have to fundamentally understand what a startup is and why it exists. The entire point of a startup is to discover a sustainable and scalable business model. It’s mainly a learning experience. That’s different from, say, a small business that sells burgers, where the business model has already been discovered. And so what happens is that the founders have some kind of an idea and a vision for their product. They go off and build it for many months without any input from users (or incorrect input from users). They get the software just perfect: fully featured, polished, maintainable, and scalable. Then they take it to market and find out that users don’t really want it: they either don’t adopt it or don’t want to pay for it. The disconnect is that the product is based on what the founders think the market needs instead of what the market actually says it needs, and those are two very different things. It’s very rare for founders to guess right out of the gate what the market needs. In fact, most companies go through multiple pivots. Uber originally was a limo sharing service, Twitter was a podcast subscription platform, Paypal was a PDA “beam” payment platform, etc.

Ultimately, what you have to remember is that the most beautiful, polished, full featured app that doesn’t solve a market need, still ends up in the trash can. So with an MVP, you figure out what is the minimum product that you can build to start testing and validating your assumptions with actual market feedback.

Development Models


Now that you know what you need to build to get some learned feedback, let’s discuss the two main models for getting software developed: Fixed-bid vs Iterative, or Waterfall vs Agile.

Fixed-bid

With this model you know exactly what you want: you go to an agency, describe in detail what you are looking for, and they then build it for X number of months. It’s the same as Waterfall with an in-house team — you would describe what you want for version 1.0, and they would go off and build it. The main thing to understand is that with this model you are pre-planning your product a few months in advance.

Iterative

With iterative or Agile development, you frequently change what your developers are working on based on changing priorities. In its most basic form, you work closely with a developer and just meet/email them with the changes you want. In a more formal setting with a larger team, it’s Agile with two-week sprints or a Kanban-style workflow.

What to Choose?

Just like we discussed above, this all comes down to the type of business that you are running. If you know exactly what you want to end up with, you can go with a fixed-bid project, where all the costs and times are mostly knowable upfront. If you are selling burgers and need a McDonalds type of restaurant, you get quotes and have builders build it for you.

But in my experience, fixed-bid is the wrong way to go for most early stage startups. As we discussed above, there is just too much shifting that goes on too frequently. You could potentially fixed-bid outsource your MVP, but since you are thinking up all the possible features you will need up front, those MVPs end up being too bloated. And once the MVP is done, you will then need to switch to iterative mode anyway. An MVP doesn’t mean that your product is done any more than a child that enters first grade is done with school. An MVP is just a first step to get some validated learning, then it’s a continuous process to learn more until you finally get to a business model, and that can take years.

At one of my startups, initially the founder came to me to help review a proposal. It was a fixed-bid proposal with an agency: 7-8 months of development at $400k. We ended up turning down the proposal, and it’s a good thing that we did. What we had at 7 months was nothing like what the founder wanted in the proposal. If he had gone with the agency, he would have been on the phone with them within a month asking to change half of the proposal.

This isn’t an absolute truth. I had someone come to me with a desktop application that they wanted to convert to a web based app. They still had documentation from the original project describing all the business logic and you could reference their existing software. If you can plan out your entire business in great detail and know for sure that it will not change during those months, fixed bid is the right option. But most startups are very dynamic and chaotic, and they need a dynamic development model to complement the business side.

The Developers


So now that you know what to build and how to build it, the next step is to figure who should build it.

Agency

There are many software development companies to choose from. You can google for the specific type of agency you are looking for, and use sites like https://clutch.co/ to read some reviews about them. Agencies typically like to do fixed-bid projects, but they will generally go along with whatever you want – team building, outstaffing, etc. Agencies are typically more expensive and rigid than individual developers. For example, with a freelancer you can arrange 30 hour weeks, temporarily pause a project, or have them work extra hours when needed. With an agency the developers are just full time employees of the agency, and the agency is interested in having them work a steady 40 hours a week, with a normal schedule and a familiar workflow process.

Freelancers

They are lower cost, more flexible, and deliver quicker overall. But you are also getting fewer guarantees. A freelancer might be great or might just waste a ton of your time and money. They are also less reliable, a freelancer might find a full time job or juggle too many projects and end up disappearing on you at any time.

You can find freelancers on https://www.upwork.com/ or sites like https://www.toptal.com/. Toptal is more expensive, but vets their freelancers for you.

In House Employees

They are reliable and have an interest in delivering high quality products, since they will be on the project long term. Costs overall will be similar to freelancers, it’s less per hour, but you have to pay benefits. You can also try to reduce the cost by giving away equity. The major downside with employees is lack of flexibility. You can ask a freelancer to reduce hours and wait, you can’t do that with an employee. It’s also much easier to let go of freelancers. An employee is a long term commitment.

Technical Co-Founder

If you are just starting out, you might consider bringing on a technical co-founder. It’s a great choice for non-tech people and can really simplify software development for your startup. If the technical co-founder is good, they will be able to recruit other developers later on as well. You have to make sure to really vet them — a bad partnership with a technical co-founder is going to hurt, just like with anyone else. And of course, you will have to give up some of your company.

Who to Choose?

That again depends on your situation. Bringing on a great tech co-founder for a tech company is extremely valuable and will save you a lot of headache in the long term. But it’s risky, a bad co-founder can jeopardize your entire company, the 3rd reason on the list for startup failures is the wrong team. If your tech co-founder can’t deliver on the MVP, ends up being a bad team player, or (in one case I witnessed) is more committed to the code than the product, it’s going to end poorly. And again, it really depends on your situation, if you just have an idea and no funding, a tech co-founder might be a must, but if you are well capitalized you might not want to go that route.

In general, if I was in the shoes of a founder with an early stage startup, I would go with freelancers. It goes along with the rest of the article, there’s too much uncertainty and you need the flexibility early on. Agencies are rigid and more expensive, they are generally better for the fixed-bid model. In house employees are more stable and long term, but they require a commitment, which you can make when you have a much better understanding of how your business functions.

For example, eventually you will make connections with developers, have a company culture, and be able to hire the right in house developers for your business. But early on, you probably will not know how to vet developers correctly. With a freelancer, you can give them some milestones, some test assignments, watch how you work together and switch them out very quickly, if necessary. With in house developers you don’t have that flexibility, hiring full time is a serious, lengthy process.

Getting Started

Hopefully, I explained some of the basics enough for you to get started with software development for your startup. The next step is for you to define what you want to build. Figure out what assumptions you are making about your business and customers. Figure out what’s the quickest way for you to test those assumptions and start engaging your customers. If it’s custom software (it’s not always custom software), spec out exactly what you want built for your MVP. Then figure out who you want to hire and go from there….

The post Software Development Basics For Non-tech Founders appeared first on QueWorx.

]]>
A Simple Blog With Comments on Django: Development and Deployment for the Smallest Ones https://www.queworx.com/blog/a-simple-blog-with-comments-on-django-development-and-deployment-for-the-smallest-ones/ Mon, 09 Mar 2020 03:54:58 +0000 https://www.queworx.com/?p=2887 This article is intended for beginner web programmers and covers the development of a blog on Django using Twitter Bootstrap and its deployment on the free hosting provider PythonAnywhere. I tried to write this to be as transparent and straightforward as possible. For more experienced users, this article will not tell you anything new, and …


The post A Simple Blog With Comments on Django: Development and Deployment for the Smallest Ones appeared first on QueWorx.

]]>
This article is intended for beginner web programmers and covers the development of a blog on Django using Twitter Bootstrap and its deployment on the free hosting provider PythonAnywhere. I tried to write this to be as transparent and straightforward as possible. For more experienced users, this article will not tell you anything new, and some techniques may seem ineffective.

I assume that the reader is already familiar with Python syntax, has a minimal understanding of Django (it’s a good idea to start with tutorials at http://codeacademy.com on the appropriate topic and read a tutorial on Django), and also knows how to work on the command line.

So, let’s start by organizing the working environment on a local computer. In principle, any operating system that you feel comfortable in will work for our purposes. Here, I describe the process for GNU/Linux; for other systems the steps may differ slightly. The system must have virtualenv installed — a utility for creating an isolated working environment (so that the libraries we use do not interfere with other programs and projects).

Create and activate an environment:

mkdir ~/projects
cd ~/projects
virtualenv env
source env/bin/activate 

In Windows, the last command should be like this:

env\Scripts\activate

Install Django using the Python PIP package Manager.

pip install django

Create a new project. Let’s call it something original — for example, mysite.

django-admin.py startproject mysite && cd mysite

The script will run and create a mysite directory with another mysite directory and several *.py files inside. Use the manage.py script to create a Django app named blog.

python manage.py startapp blog

Edit settings in the file mysite/settings.py (note: I mean ~/projects/mysite/mysite/settings.py) adding the following:

# coding: utf-8
import os

BASE_DIR = os.path.dirname(os.path.dirname(__file__))

In the first line, we specify the encoding we work in; to avoid confusion and glitches, I suggest specifying it in all modified *.py files and saving them in UTF-8. BASE_DIR will store the full path to our project so that you can use relative paths for further configuration.

Let’s set up a database, in our project it is quite possible to use SQLite

DATABASES = { 'default':
    {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

Configure the time zone and language:

TIME_ZONE = 'Europe/Moscow'
LANGUAGE_CODE = 'ru-ru'

In order for Django to find out about the created app, add ‘blog’ to the INSTALLED_APPS tuple, and uncomment the ‘django.contrib.admin’ line to enable the built-in admin panel:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admin',
    'blog',
)

To make the admin panel work, edit mysite/urls.py

from django.conf.urls import patterns, include, url

from django.contrib import admin
admin.autodiscover()  #function that automatically discovers admin.py files in our apps

urlpatterns = patterns('',
    url(r'^admin/', include(admin.site.urls)), #URL of admin http://site_name/admin/
)

Create a model in blog/models.py

from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=255) # the title of the post
    datetime = models.DateTimeField(u'Date of Publication') # date of publication
    content = models.TextField(max_length=10000) # the text of the post

    def __unicode__(self):
        return self.title

    def get_absolute_url(self):
        return "/blog/%i/" % self.id

Based on this model, Django will automatically create tables in the database.

Register it in the admin panel blog/admin.py

from django.contrib import admin
from blog.models import Post # our model from blog/models.py

admin.site.register(Post)

Create tables with the command:

python manage.py syncdb

When you first call this command, Django will ask to create a superuser, you should do that.

Start the debug server that Django provides:

python manage.py runserver

Now enter the url in the browser

http://localhost:8000/admin/

If everything went well, we should see the Django admin login page.

Go to the admin panel with the previously created username/password — now we can add and delete posts (buttons to the right of Posts)

Let’s create some posts for debugging.

Now let’s create a frontend. We need only two template pages — one with a list of all posts, the second – the content of the post.

Edit blog/views.py

from blog.models import Post 
from django.views.generic import ListView, DetailView

class PostsListView(ListView): # list presentation
    model = Post               # model for representation

class PostDetailView(DetailView): # detailed view of the model
    model = Post

Add this line to urlpatterns mysite/urls.py

url(r'^blog/', include('blog.urls')),

For all URLs starting with /blog/ to be processed using urls.py from the blog module, and create the file itself urls.py in the blog module with the following content:

#coding: utf-8
from django.conf.urls import patterns, url

from blog.views import PostsListView, PostDetailView 

urlpatterns = patterns('',
url(r'^$', PostsListView.as_view(), name='list'), # that is, with URL http://site_name/blog/
                                                  # a list of posts will be displayed
url(r'^(?P<pk>\d+)/$', PostDetailView.as_view()), # and with URL http://site_name/blog/number/
                                                  # a post with a specific number will be displayed

)

Now you need to create page templates. By default, for the PostListView class, Django will search for a template in blog/templates/blog/post_list.html (such a long and strange path is associated with the logic of the framework, the developer can change this behavior, but in this article, I won’t touch on this)

Let’s create this file:

{% block content %}
    {% for post in object_list %}
        <p>{{ post.datetime }}</p>
        <h2><a href="{{ post.get_absolute_url }}">{{ post.title }}</a></h2>
        <p>{{ post.content }}</p>
    {% empty %}
    <p>No Posts</p>
    {% endfor %}

{% endblock %}

Ok, let’s try how it works by going to the URL at http://localhost:8000/blog/. If there are no errors, we will see a list of posts where the title of each post is a link. For now these links lead nowhere, we need to fix that. By default, for the PostDetailView class, the template is located in blog\templates\blog\post_detail.html.

Let’s create it:

{% block content %}
    <p>{{ post.datetime }}</p>
    <h2>{{ post.title }}</h2>
    <p>{{ post.content }}</p>
{% endblock %}

And again check: http://localhost:8000/blog/1/

We will add the ability to comment on our posts. For this purpose, we will use the DISQUS service, which we will install using pip:

pip install django-disqus 

This module provides comments functionality with anti-spam protection, avatars, etc., and also takes care of comment storage.

Add to post_detail.html before {% endblock %}

<p>
    {% load disqus_tags %}
    {% disqus_dev %}
    {% disqus_show_comments %}
</p>

In INSTALLED_APPS, in settings.py add ‘disqus’

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admin',
    'blog',
    'disqus',
)

And also add to settings.py

DISQUS_API_KEY = '***'
DISQUS_WEBSITE_SHORTNAME = '***'

The last two values are obtained by registering on http://disqus.com.

Test the project in the browser. Great, the functionality of our app is impressive, but we need to do something about the design. The easiest, and at the same time modern, option is to use Twitter Bootstrap.

Download the archive http://twitter.github.io/bootstrap/assets/bootstrap.zip and unzip it to the static directory of our project (I mean ~/projects/mysite/static – create it)

Edit settings.py so that Django knows where to look for static pages.

STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)

Create a blog/templates/blog/base.html with the following content

<!DOCTYPE html>
<html lang="ru">
    <head>
        <meta charset="utf-8" />
        <title>MyBlog</title>
        <link href="{{STATIC_URL}}bootstrap/css/bootstrap.css" rel="stylesheet">
        <style>
            body {
                padding-top: 60px; /* 60px to make the container go all the way to the bottom of the topbar */
            }
        </style>
        <link href="{{STATIC_URL}}bootstrap/css/bootstrap-responsive.css" rel="stylesheet">
        <!--[if lt IE 9]>
        <script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script>
        <![endif]-->
        <script src="{{STATIC_URL}}bootstrap/js/bootstrap.js" type="text/javascript"></script>
        {% block extrahead %}
        {% endblock %}
        <script type="text/javascript">
        $(function(){
        {% block jquery %}
        {% endblock %}
        });
        </script>
    </head>
<body>

<div class="navbar navbar-inverse navbar-fixed-top">
    <div class="navbar-inner">
        <div class="container">
            <div class="brand">My Blog</div>
            <ul class="nav">
                <li><a href="{% url 'list' %}" class="">List of posts</a></li>
            </ul>
        </div>
    </div>

</div>

<div class="container">
     {% block content %}Empty page{% endblock %}
</div> <!-- container -->

</body>
</html>

This is the basic template for our pages, include it in our post_list.html and post_detail.html by adding this first line into them

{% extends 'blog/base.html' %}

Check that everything works. Now that the design is set, you can start deploying the app on a free cloud hosting service.

Register a free N00b account on PythonAnywhere. I like this service for ease of installation of Django. Everything happens almost the same as on the local computer.

Let’s say we created a user in PythonAnywhere with the name djangotest, then our application will be located at djangotest.pythonanywhere.com. Note: replace ‘djangotest’ with your PythonAnywhere username everywhere in the text below.

Change in settings.py

DEBUG = False

and add

ALLOWED_HOSTS = ['djangotest.pythonanywhere.com']

Upload files to the host in any of the possible ways.

In my opinion, for an inexperienced user, the easiest way is to archive the project folder, upload the archive to the server (in the Files->Upload a file section) and unzip it on the server using the command in the bash shell (in the Consoles -> bash Section):

For example, if we upload mysite.tar.gz, run this in the PythonAnywhere console

tar -zxvf mysite.tar.gz

Now we configure the working environment on the server, run this in the PythonAnywhere console:

virtualenv env

source env/bin/activate

pip install django django-disqus

Configure static pages in the Web -> Static files section:

The first line points to where Bootstrap is; the second to the static files of the built-in Django admin panel.

Configure WSGI (Web -> It is configured via a WSGI file stored at: …):

activate_this = '/home/djangotest/env/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

import os
import sys

path = '/home/djangotest/mysite'
if path not in sys.path:
    sys.path.append(path)
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

Click the button Web -> Reload djangotest.pythonanywhere.com

Go to djangotest.pythonanywhere.com/blog/ — congratulations, it wasn’t easy, but you did it. Now you have your own cozy blog, developed with your own hands on modern web technologies!


Written by Станислав Фатеев, translated from https://habr.com/en/post/181556/

The post A Simple Blog With Comments on Django: Development and Deployment for the Smallest Ones appeared first on QueWorx.

]]>
Sending Emails Using asyncio and aiohttp From a Django Application https://www.queworx.com/blog/sending-emails-using-asyncio-and-aiohttp-from-a-django-application/ Tue, 28 Jan 2020 19:25:20 +0000 http://www.queworx.com/?p=2610 Hi everyone! I develop and support the notification service at Ostrovok.ru. The service is written in Python3 and Django. In addition to transactional emails, push notifications, and messages, the service also takes care of mass emailing of marketing offers (not spam! trust me, unsubscribe works better than subscribe on our service) for users who have …


The post Sending Emails Using asyncio and aiohttp From a Django Application appeared first on QueWorx.

]]>
Hi everyone!

I develop and support the notification service at Ostrovok.ru. The service is written in Python3 and Django. In addition to transactional emails, push notifications, and messages, the service also takes care of mass emailing of marketing offers (not spam! trust me, unsubscribe works better than subscribe on our service) for users who have given their consent. Over time, the database of active recipients grew to more than a million email addresses, which the email service was not ready for. I want to talk about how new Python features allowed us to speed up mass emailing and save resources, and what problems we encountered when working with them.

The original implementation

Initially, we implemented mass emailing with the simplest solution: for each recipient, a task was placed in a queue, where one of 60 workers (a feature of our queues is that each worker runs in a separate process) prepared the context, rendered the template, sent an HTTP request to Mailgun to send the email, and created a record in the database that the email was sent. The entire process took up to 12 hours, sending about 0.3 emails per second from each worker and blocking emails for small campaigns.

Asynchronous solution

Quick profiling showed that workers spent a large amount of time on setting up connections with Mailgun, so we started grouping tasks into chunks, one chunk for each worker. Workers began using a single connection with Mailgun, which dropped the time of emailing the list to 9 hours, each worker sending an average of 0.5 emails per second. Subsequent profiling again showed that network requests still took the majority of the time, which led us to the idea of using asyncio.
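The chunking mentioned above is straightforward; here is a stdlib-only sketch of such a helper (the name and chunk size are illustrative, not from the original service):

```python
def chunked(items, size):
    """Split a list of recipient ids into chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

Each worker then opens one Mailgun connection per chunk instead of one per recipient.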

Before putting all the processing in an asyncio loop, we had to solve several problems:

  1. Django ORM is not yet able to work with asyncio, although it releases the GIL during query execution. This means that database queries can be executed in a separate thread without blocking the main loop.
  2. Current versions of aiohttp require Python versions 3.6 and higher, which required updating the Docker image at the time of implementation. Experiments on older versions of aiohttp and Python 3.5 have shown that the sending speed on these versions is much lower than on newer versions, and is comparable to sequential sending.
  3. Storing a large number of asyncio coroutines quickly consumes all available memory. This means you can’t prepare the coroutines for all the emails in advance and then run a loop over them; you have to generate data while the already-prepared emails are being sent.
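Point 1 is the key workaround: a blocking ORM query can run in a thread while the event loop keeps working. A minimal stdlib sketch of the idea (the `blocking_query` function is a hypothetical stand-in for a Django ORM call, not code from the original service):

```python
import asyncio
import time

def blocking_query(recipient_id):
    # Hypothetical stand-in for a blocking Django ORM query
    time.sleep(0.01)
    return {"id": recipient_id}

async def fetch(loop, recipient_id):
    # Run the blocking call in the default ThreadPoolExecutor so the
    # event loop can keep scheduling other coroutines meanwhile
    return await loop.run_in_executor(None, blocking_query, recipient_id)

async def fetch_all(ids):
    loop = asyncio.get_event_loop()
    return await asyncio.gather(*(fetch(loop, i) for i in ids))
```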

Taking all this into account, we create our own asyncio loop inside each worker, built around a thread-pool-style pattern consisting of:

  • One or more producers that work with the database via the Django ORM in a separate thread through a ThreadPoolExecutor. A producer tries to aggregate data requests into small batches, renders templates for the data via Jinja2, and puts the emailing data into the task queue.
def get_campaign_send_data(ids: Iterable[int]) -> Iterable[Mapping[str, Any]]:
    """ We generate email data, here we work with Django ORM and template rendering."""
    return [{'id': id} for id in ids]


async def mail_campaign_producer(ids: Iterable[int], task_queue: asyncio.Queue) -> None:
    """
    We group recipients into subchunks and generate sending data for them,
    which we place in the queue. Data generation requires working with the
    database, so we perform it in a ThreadPoolExecutor.
    """

    loop = asyncio.get_event_loop()
    total = len(ids)
    for subchunk_start in range(0, total, PRODUCER_SUBCHUNK_SIZE):
        subchunk_ids = ids[subchunk_start : min(subchunk_start + PRODUCER_SUBCHUNK_SIZE, total)]
        send_tasks = await loop.run_in_executor(None, get_campaign_send_data, subchunk_ids)
        for task in send_tasks:
            await task_queue.put(task)
  • Several hundred email senders: asyncio coroutines that read data from the task queue in an infinite loop, send a network request for each item, and put the result (response or exception) into the report queue.
async def send_mail(data: Mapping[str, Any], session: aiohttp.ClientSession) -> Union[Mapping[str, Any], Exception]:
    """ Sending a request to an external service."""
    async with session.post(REQUEST_URL, data=data) as response:
        if response.status != 200:
            raise Exception(f"Unexpected response status: {response.status}")
    return data


async def mail_campaign_sender(
    task_queue: asyncio.Queue, result_queue: asyncio.Queue, session: aiohttp.ClientSession
) -> None:
    """
    Getting data from the queue and sending network requests.
    Don't forget to call task_done so that the calling code knows when
    the email has been sent.
    """

    while True:
        try:
            task_data = await task_queue.get()
            result = await send_mail(task_data, session)
            await result_queue.put(result)
        except asyncio.CancelledError:
            # Correctly processing cancellation of the coroutine
            raise
        except Exception as exception:
            # Processing errors in email sending
            await result_queue.put(exception)
        finally:
            task_queue.task_done()
  • One or several workers that group data from the report queue and write the results to the database with a bulk request.
def process_campaign_results(results: Iterable[Union[Mapping[str, Any], Exception]]) -> None:
    """We process the results of transmission: exception and success and write them to the database"""
    pass


async def mail_campaign_reporter(task_queue: asyncio.Queue, result_queue: asyncio.Queue) -> None:
    """
    We group reports into a list and pass them to ThreadPoolExecutor for processing, to write emailing results to the database.
    """
    loop = asyncio.get_event_loop()
    results_chunk = []
    while True:
        try:
            results_chunk.append(await result_queue.get())
            if len(results_chunk) >= REPORTER_BATCH_SIZE:
                await loop.run_in_executor(None, process_campaign_results, results_chunk)
                results_chunk.clear()
        except asyncio.CancelledError:
            await loop.run_in_executor(None, process_campaign_results, results_chunk)
            results_chunk.clear()
            raise
        finally:
            result_queue.task_done()
  • A task queue, an instance of asyncio.Queue, limited to a maximum number of items so that the producer doesn’t overfill it and consume all the memory.
  • A report queue, also an instance of asyncio.Queue with a limit on the maximum number of items.
  • An asynchronous method that creates the queues and workers and finishes the transmission when they are stopped.
async def send_mail_campaign(
    recipient_ids: Iterable[int], session: aiohttp.ClientSession, loop: asyncio.AbstractEventLoop = None
) -> None:
    """
    Creates a queue and starts workers for processing.
    Waits for recipients to be generated, then waits for reports to be sent and saved. 
    """
    executor = ThreadPoolExecutor(max_workers=PRODUCERS_COUNT + 1)
    loop = loop or asyncio.get_event_loop()
    loop.set_default_executor(executor)

    task_queue = asyncio.Queue(maxsize=2 * SENDERS_COUNT, loop=loop)
    result_queue = asyncio.Queue(maxsize=2 * SENDERS_COUNT, loop=loop)

    producers = [
        asyncio.ensure_future(mail_campaign_producer(recipient_ids, task_queue)) for _ in range(PRODUCERS_COUNT)
    ]
    consumers = [
        asyncio.ensure_future(mail_campaign_sender(task_queue, result_queue, session)) for _ in range(SENDERS_COUNT)
    ]
    reporter = asyncio.ensure_future(mail_campaign_reporter(task_queue, result_queue))

    # Wait until all the emails have been prepared
    done, _ = await asyncio.wait(producers)

    # When all sends are completed, we stop the workers
    await task_queue.join()
    while consumers:
        consumers.pop().cancel()

    # When report saving is complete, we also stop the corresponding worker
    await result_queue.join()
    reporter.cancel()
  • Synchronous code that creates a loop and starts the mailing
async def close_session(future: asyncio.Future, session: aiohttp.ClientSession) -> None:
    """
    Close the session when all processing is complete.
    The aiohttp documentation recommends adding a delay before closing the session. 

    """
    await asyncio.wait([future])
    await asyncio.sleep(0.250)
    await session.close()


def mail_campaign_send_chunk(recipient_ids: Iterable[int]) -> None:
    """
    Entry point for starting a mailing list.
    Accepts recipient IDs, creates an asyncio loop, and starts the sending coroutine.

    """
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    # Session
    connector = aiohttp.TCPConnector(limit_per_host=0, limit=0)
    session = aiohttp.ClientSession(
        connector=connector, auth=aiohttp.BasicAuth('api', API_KEY), loop=loop, read_timeout=60
    )

    send_future = asyncio.ensure_future(send_mail_campaign(recipient_ids, session, loop=loop))
    cleanup_future = asyncio.ensure_future(close_session(send_future, session))
    loop.run_until_complete(asyncio.wait([send_future, cleanup_future]))
    loop.close()
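The bounded queues are what keep memory consumption flat: a producer simply blocks on put() once the queue is full. A toy stdlib illustration of that backpressure (names and sizes are made up for the example):

```python
import asyncio

async def produce(queue, n):
    for i in range(n):
        # put() blocks once the queue holds `maxsize` items, so data is
        # generated only as fast as the consumer drains it
        await queue.put(i)
    await queue.put(None)  # sentinel: no more items

async def consume(queue, seen):
    while True:
        item = await queue.get()
        if item is None:
            return
        seen.append(item)

async def run_pipeline(total):
    queue = asyncio.Queue(maxsize=2)  # tiny limit to make the effect visible
    seen = []
    await asyncio.gather(produce(queue, total), consume(queue, seen))
    return seen
```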

After implementing this solution, the time for sending a mass emailing dropped to an hour for the same volume of emails, with 12 workers involved. That is, each worker sends 20-25 emails per second, which is 50-80 times faster than the original solution. The memory consumption of the workers stayed at the same level, the processor load increased slightly, and network utilization increased many times over, which is the expected effect. The number of database connections also increased, since each producer thread and each report-saving thread actively works with the database. Meanwhile, the freed-up workers can send smaller emailing lists while the mass campaign is running.

Despite all the advantages, this implementation has a number of issues that must be taken into account:

  • You must be careful when handling errors. An unhandled exception may terminate a worker, causing the campaign to fail.
  • When sending completes, you must not lose the reports for recipients that ended up in the final chunks, and must save them to the database.
  • The logic for forcibly stopping and resuming campaigns becomes more complicated, because after stopping the sending workers you have to work out which recipients were sent emails and which were not.
  • After a while, the Mailgun support staff contacted us and asked us to reduce our speed, because mail services start temporarily rejecting emails if the sending frequency exceeds a threshold. This is easy to do by reducing the number of workers.
  • You would not be able to use asyncio if some stage of the email sending performed CPU-intensive operations. Rendering templates with Jinja2 was not a very resource-intensive operation and had almost no effect on the sending speed.
  • Using asyncio for emailing requires that the mail queue handlers run in separate processes.
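On the first point, one defensive option is a supervisor wrapper that restarts a worker coroutine after an unhandled exception. This is a hypothetical sketch, not what the original service does:

```python
import asyncio

async def supervised(worker, *args):
    # Restart the worker if it dies with an unhandled exception, so one
    # bad task cannot silently kill the whole campaign
    while True:
        try:
            return await worker(*args)
        except asyncio.CancelledError:
            raise  # cancellation must still propagate
        except Exception:
            continue  # in real code, log the error before restarting
```

Wrapping each sender in such a helper would keep the pool of senders at full strength, though in practice you would also want a retry limit so a poisoned task cannot loop forever.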

I hope our experience will be useful to you! If you have any questions or ideas, please write in the comments!


Written by Sergey, translated from here

The post Sending Emails Using asyncio and aiohttp From a Django Application appeared first on QueWorx.

]]>
Hiring Developers. Tips From a Developer https://www.queworx.com/blog/hiring-developers-tips-from-a-developer/ Fri, 17 Jan 2020 21:23:33 +0000 http://www.queworx.com/?p=2453 I have already come across several articles about hiring developers and read them with some interest, because I am a developer myself, and I was curious to know how we are evaluated at interviews. My impressions? I’m sad… Almost all of the articles, in my opinion, remind me of “bad advice”. Just a warning, the …

Hiring Developers. Tips From a Developer Read More »

The post Hiring Developers. Tips From a Developer appeared first on QueWorx.

]]>
I have already come across several articles about hiring developers and read them with some interest, because I am a developer myself, and I was curious to know how we are evaluated at interviews.

My impressions? I’m sad… Almost all of the articles, in my opinion, remind me of “bad advice”.

Just a warning, the whole article is a purely personal opinion, but supported by developer friends and colleagues.

And so…

The first meeting, a job interview without a technical expert

HRs, don’t kid yourself. You will never understand how good a developer is…

Unless you can stick electrodes in their ear and run end-to-end testing… But since no such technology exists, all you can assess is the adequacy and, at least in part, the motivation of the person sitting in front of you.

And believe me, that’s enough.

After all, your task is to find a person who can join the team, work productively in it, and have their work rewarded in the way that they expect and that your company can provide (money, recognition, exciting projects, etc.).

All attempts to ask about technical nuances will look inappropriate and helpless. Personally, it really annoys me when I am asked about something the interviewer doesn’t understand themselves. I just want to get up and leave.

What else can you ask at the first stage? It depends on the specifics of the job.

If you need an experienced person — ask about the experience, find out what problems they solved, what difficulties they overcame.

If you need a person who can be trained, give them a couple of logic problems and check how well their brain performs. The information collected in the first stage will be enough to screen out 80%-90% of candidates.

Part two. Interview with a technical specialist

DO NOT ASK THEORY outside the context of the practical experience of a particular developer!

Personally, I know several people who studied as developers with me. They knew all the theory by heart, but when it came to real programming, they couldn’t do anything useful.

What, in my opinion, should you ask the candidate?

Ask for technical nuances from their previous experience, especially those that overlap with future work.

By the way the person talks about it, it will be clear:

  • Whether they really understand the issue or just learned the right words to increase their price
  • How much of their experience and knowledge is suitable for the current job
  • Will they be able to cope with future work
  • Will they be able to learn if they don’t have the required experience

And I think that’s enough to make the final choice.

You can only learn more about the person during the trial period.

I hope this material will be useful to someone, thank you for your attention.


Written by Konstantin, translated from here

The post Hiring Developers. Tips From a Developer appeared first on QueWorx.

]]>
Why You Should Try FastAPI https://www.queworx.com/blog/why-you-should-try-fastapi/ Mon, 13 Jan 2020 18:23:33 +0000 http://www.queworx.com/?p=2388 FastAPI — a relatively new web framework written in the Python programming language for creating a REST (and if you try really hard, then GraphQL) API, based on new features of Python 3.6+, such as: type-hints, native synchronicity (asyncio). Among other things, FastAPI tightly integrates with OpenAPI-schema and automatically generates documentation for your API via …

Why You Should Try FastAPI Read More »

The post Why You Should Try FastAPI appeared first on QueWorx.

]]>
Logo taken from the FastAPI GitHub repository

FastAPI is a relatively new web framework, written in the Python programming language, for creating REST (and, if you try really hard, GraphQL) APIs. It is based on new features of Python 3.6+, such as type hints and native asynchrony (asyncio). Among other things, FastAPI integrates tightly with the OpenAPI schema and automatically generates documentation for your API via Swagger and ReDoc.

FastAPI is based on Starlette and Pydantic.
Starlette is an ASGI microframework for writing web applications.
Pydantic is a library for parsing and validating data based on Python type hints.

What do people say about FastAPI?

“[…] I’m using fastapi a ton these days. […] I’m actually planning to use it for all of my team’s ML services at Microsoft. Some of them are getting integrated into the core Windows product and some Office products.”

Kabir Khan — Microsoft (ref)

“If you’re looking to learn one modern framework for building REST APIs, check out FastAPI. […] It’s fast, easy to use and easy to learn. […]”

“We’ve switched over to FastAPI for our APIs […] I think you’ll like it […]”

Ines Montani — Matthew Honnibal — Explosion AI founders — spaCy creators (ref) — (ref)

Minimal API created with FastAPI

I will try to show you how to create a simple but useful API with documentation for developers. We will write a random phrase generator!

Installation of necessary components

pip install wheel -U
pip install uvicorn fastapi pydantic

A new module!
Uvicorn is an ASGI-compatible web server that we will use to run our application.

First, let’s create the basis of our application.

from fastapi import FastAPI

app = FastAPI(title="Random phrase")

This app already works and can be started.
Run the following command in your terminal, then open http://127.0.0.1:8000/docs in your browser.

uvicorn <your filename>:app

But so far, our app doesn’t have any endpoints — let’s fix that!

Database

Since we’re writing a random phrase generator, we obviously have to store the phrases somewhere. For that, I chose a simple python-dict.

Let’s create the file db.py and start writing code.

Import the necessary modules:

import typing
import random
from pydantic import BaseModel
from pydantic import Field

Then we will define two models: the input phrase (the one the user will send to us) and the output phrase (the one we will send back to the user).

class PhraseInput(BaseModel):
    """Phrase model"""

    author: str = "Anonymous"  # author name. If not passed, the standard value is used.
    text: str = Field(..., title="Text", description="Text of phrase", max_length=200)  # The text of the phrase. The maximum value is 200 characters.

class PhraseOutput(PhraseInput):
    id: typing.Optional[int] = None  # ID of phrases in our database.

After that, we will create a simple class to work with the database:

class Database:
    """
    Our **fake** database.
    """

    def __init__(self):
        self._items: typing.Dict[int, PhraseOutput] = {}  # id: model

    def get_random(self) -> int:
        # Getting a random phrase id (random.choice needs a sequence, so wrap the keys in a list)
        return random.choice(list(self._items.keys()))

    def get(self, id: int) -> typing.Optional[PhraseOutput]:
        # Getting a phrase by id
        return self._items.get(id)

    def add(self, phrase: PhraseInput) -> PhraseOutput:
        # Adding a phrase

        id = len(self._items) + 1
        phrase_out = PhraseOutput(id=id, **phrase.dict())
        self._items[phrase_out.id] = phrase_out
        return phrase_out

    def delete(self, id: int) -> None:
        # Deleting a phrase

        if id in self._items:
            del self._items[id]
        else:
            raise ValueError("Phrase doesn't exist")

Now we can start writing the API itself.

API

Let’s create the file main.py and import the following modules:

from fastapi import FastAPI
from fastapi import HTTPException
from db import PhraseInput
from db import PhraseOutput
from db import Database

Initialize our application and database:

app = FastAPI(title="Random phrase")
db = Database()

And let’s write a simple method for getting a random phrase!

@app.get(
    "/get",
    response_description="Random phrase",
    description="Get random phrase from database",
    response_model=PhraseOutput,
)
async def get():
    try:
        phrase = db.get(db.get_random())
    except IndexError:
        raise HTTPException(404, "Phrase list is empty")
    return phrase

As you can see, I also specify some other values in the decorator to generate pretty documentation 🙂 You can look at all the possible parameters in the official documentation.

In this piece of code, we try to get a random phrase from the database, and if the database is empty, we return an error with the code 404.

Similarly, we write other methods:

@app.post(
    "/add",
    response_description="Added phrase with *id* parameter",
    response_model=PhraseOutput,
)
async def add(phrase: PhraseInput):
    phrase_out = db.add(phrase)
    return phrase_out

@app.delete("/delete", response_description="Result of deletion")
async def delete(id: int):
    try:
        db.delete(id)
    except ValueError as e:
        raise HTTPException(404, str(e))

That’s all! Our small but useful API is ready!

Now we can launch the app using uvicorn, open the interactive documentation (http://127.0.0.1:8000/docs), and try our API!

Useful material

Of course, I couldn’t tell you about all the features of FastAPI, such as its smart dependency injection system, middlewares, cookies, the standard API authentication methods (JWT, OAuth2, API key), and much more!

But the purpose of this article is not so much to review all the features of this framework, but rather to encourage you to explore it yourself. FastAPI has excellent documentation with a bunch of examples.

Code from the article on GitHub
Official documentation
Repository on Github


Written by prostomarkeloff, translated from here

For additional information check out this tutorial on how to build a high performing app in FastAPI from Toptal.

The post Why You Should Try FastAPI appeared first on QueWorx.

]]>
React Native – A Silver Bullet for All Problems? How We Choose a Cross-Platform Tool for Profi.ru https://www.queworx.com/blog/react-native-a-silver-bullet-for-all-problems-how-we-choose-a-cross-platform-tool-for-profi-ru/ Tue, 07 Jan 2020 20:51:13 +0000 http://www.queworx.com/?p=2344 Hello, my name is Gevorg. I’m Head of Mobile in Profi.ru. I Want to share with you the story of our experiment with React Native. I will tell you how we evaluated the pros and cons of development in React Native – in theory and in practice. This article will be useful for those who …

React Native – A Silver Bullet for All Problems? How We Choose a Cross-Platform Tool for Profi.ru Read More »

The post React Native – A Silver Bullet for All Problems? How We Choose a Cross-Platform Tool for Profi.ru appeared first on QueWorx.

]]>
Hello, my name is Gevorg. I’m Head of Mobile at Profi.ru. I want to share the story of our experiment with React Native. I will tell you how we evaluated the pros and cons of developing in React Native, in theory and in practice. This article will be useful for those who are interested in cross-platform mobile development but have not yet decided whether to go in that direction.

Maximum Acceleration

It all started with our company’s decision to speed up development by 10 times. We set an impossible goal to go beyond our familiar surroundings and try new things. All the development teams at Profi.ru took on experiments. At that time, the company had 13 native mobile developers, including two team leads and me. My team worked on two mobile apps: in the first, clients look for specialists; in the second, specialists look for clients. For me, this period was confusing and emotionally stressful. I felt we had already done enough to make everything work quickly.

We used a common architecture throughout projects and kept the code clean. We used generators to create all the module files. We tried to move all the business logic to the backend. We set up CI/CD and covered the applications with end-to-end tests. Because of all this, some apps were released steadily once a week. I had no idea how to speed up development even by two times. How can we possibly do 10? And so, we wrote down what is important to us.

  1. A unified codebase. I wanted all our mobile developers to write the same code, in the same language, without platform differences between iOS and Android. That alone would speed up development by two times.
  2. Ease of learning a new tool. So that when we expand the team, we don’t have any problems with hiring or retraining.
  3. Quick releases. So that we can release not once a week, but every day.
  4. Instant updates. So that all users receive updates at the same time. Same as what’s now happening in web development.

After a little research, we settled on three candidates: React Native, Flutter, and Kotlin/Native. Neither Flutter nor Kotlin/Native supports instant releases, and we thought that was probably the most important thing on our list. Also, those technologies were quite raw at the time. We settled on React Native, which lets us release instantly. Plus, most of our developers had already used React.

In general, I had a negative attitude towards cross-platform tools, like most native mobile developers. Go to any mobile conference and bring it up, and you will immediately be pelted with stones. I like to do that myself :-) So, to confirm or refute our concerns, we conducted our own investigation.

Pros, risks, and issues

We studied examples of React Native use in various companies, both successful and not so much. Together with our head of development, Boris Egorov, we carefully read more than three dozen articles and other docs; in some, we discussed every paragraph. The most interesting links are at the end of the article. We noted the things that could speed us up, as well as possible risks and issues. After that, we talked to developers from three companies. In each, the team had built a mass-market product and had worked with React Native for at least a year.

The pros were pretty obvious:

  1. A unified codebase.
  2. Over-the-Air (OTA) updates that bypass the app stores.
  3. From the first two points, it followed that the speed of delivery of features to users will increase.
  4. Web developers will be able to write code for mobile applications. If a web developer knows React well, they can quickly learn React Native. And if you are a mobile developer who already knows this framework, you can relatively quickly get into web development.

The list of risks was longer 🙂

The first risk. Instead of one platform, we have to support three in the long run: Android, iOS, and React Native.

Reality. One of the developers we talked to was integrating React Native into existing code. Yes, there is a full-fledged third framework, but you don’t get away from native development. His team had to synchronize state between React Native and the native code, which involved a lot of switching between different parts of the code, different paradigms, and different IDEs. So they decided to write a new project from scratch: create the framework on React Native and insert ready-made native pieces where needed. It got better.

The second risk. React Native as a black box: sometimes the developer does not understand what caused a bug, and you have to search everywhere, in your React Native code, in the native part of the product, or in the React Native framework itself.

Reality. The guys we talked to added logging and various tools to their apps: Crashlytics, Kibana, and so on. Problems remain, but it becomes clearer where they occur.

The third risk. In the articles, it was often mentioned that React Native is suitable for small projects, but not for large products with platform functionality.

Reality. We looked at the market to see if any big companies work with React Native. Turns out there are dozens, if not hundreds. Including Skype, Tesla, Walmart, Uber Eats and “Кухня на районе.”

The fourth risk. The project may break with any operating system update from Apple or Google.

Reality. We decided the risk was acceptable. The same risk exists for native development. When a new OS for iOS and Android comes out, you adapt your app to it.

The fifth risk. There is no support for 64-bit systems on Android, and the issue has been open since 2015. And since August 2019, Google Play has not accepted builds that support only 32-bit systems.

Reality. We looked at the issue, which the React Native team was working on in the summer of 2018. They promised to add support in the next release, although they still hadn’t fully fixed 64-bit support, which was very upsetting. Support was later added, but some Android devices fail after the transition. As we found out later, the percentage is insignificant, but it was still the most painful point for me.

The sixth risk. The likelihood that tomorrow Apple or Google will release a new version of their OS and break React Native. Or a new technology that Profi.ru can’t support.

Reality. There are no guarantees, for many other companies or for us. You either accept the risk and work with it, or you try something else. We decided to work with it and to solve all the problems as they came up.

The seventh risk. We could not tell in advance how fast React Native would be compared to a native application and what performance we could expect.

Reality. A verbatim quote from one of our conversations: “when scrolling, lists of elements of variable height slowed down.” We decided to test it in practice. Jumping ahead a little: while writing the first prototype of the application, we did not see that problem, but when developing a full-fledged application, there were many questions about React Native’s performance.

The eighth risk. It’s not clear how quickly we could find React Native developers. On HeadHunter, I saw about 300 resumes, compared to more than 150 thousand for iOS developers.

Reality. We didn’t dig too deep into it, as we had already hired React developers many times and knew what to look for. We decided that, as a last resort, we could retrain React developers in React Native.

There was also a risk that someone would leave the team, as mobile developers do not like this technology. I was right, by the way. Someone’s gone 🙁

What we change and what we don’t

We discussed the results of the investigation with the company’s founders, Sergey Kuznetsov and Yegor Rudy, and got the go-ahead to conduct the experiment.

We decided to create a new product from scratch instead of embedding React Native into an existing one. Also, we didn’t want to touch our client app, the one where clients look for specialists. It was quite polished, and economically it did not make sense to change it radically. It was also essential for us that the client application keep its own native experience on both iOS and Android.

We wanted to drastically change the app for specialists. In contrast to the client app, we did not mind that specialists would have the same interaction experience on iOS and Android. Plus, we believed that the product for specialists could do without animation and visual effects. But before switching the whole team to the new technology, we needed to feel out how it works.

The experiment in action

In December 2018, we assembled a team of three people. Two React-developers and one native developer, me. I understand how Android works and am well versed in iOS development.

As part of the experiment we wanted to check the following items:

  • How instant releases work in React Native
  • How the interaction between native components and React Native works
  • Can we use our native components
  • How React Native works with the camera, pushes and deep links
  • How navigation and state saving works in React Native
  • How much can we do with React Native pixel perfect
  • How automatic testing works in React Native
  • How quickly a native or React developer can learn the technology

We got the first results within a month and a half after diving into development.

  • I started writing code using React Native in two weeks. For me, the technology was quite simple. One of our React developers helped me a lot – he taught me about React/Redux and Javascript in general. It was necessary to get into the subtleties of React/Redux, but after a while “the neural network began to learn”, as they say in our company 🙂
  • I was pleasantly surprised that Javascript + Flow gives strict typing in some ways. For Javascript, I had much lower expectations. At the same time, I would definitely prefer Swift and Kotlin: they are much more beautiful and pleasant for me than Javascript, but here the main words are “for me.”
  • It helped that the team had developers who can write code for iOS, Android, and React. Each platform had its own specific problems. To solve them, the team must be cross-functional.
  • Instant releases work. It’s like magic to me. It is not necessary to wait for releases and approvals from Apple. Want to push a release, just take it and push.
  • The project broke very frequently. It’s really not cool: you pull changes from a branch, try to run the project, and nothing happens. It was very annoying. At some point, we just wrote a script that cleans the project completely. We can’t say that we solved the whole problem, but we solved most of it.
  • We still have to work with three platforms, despite the fact that we mostly write code in React Native. All developers had three IDE’s: Xcode, Android Studio, WebStorm.
  • Pushes, deeplinks, camera, navigation are launching. But they are started either with the help of third-party libraries, or libraries in native code should be written by us, and then connected to React Native.

I want to go back to the title. So, is React Native a silver bullet for all problems? We decided for ourselves that no, it isn’t. At the same time, we got what we wanted: we have increased the speed of feature delivery several times over, and now we can release to all users every day. It is also important that the company has cross-functional teams, where each developer writes code both for Android/iOS and for the web.

And yes, the apps are in stores 🙂

Useful articles about React Native

  1. Why Discord is Sticking with React Native — Fanghao (Robin) Chen
  2. How I Came to Love and Hate React Native — Andrey Melikhov
  3. React Native from a Mobile Developer’s Point of View — Andrey Konstantinov
  4. React Native at Instagram — Instagram Engineering
  5. React Native: A retrospective from the mobile-engineering team at Udacity — Nate Ebel
  6. React Native: a Fact-Based Battle in One Act — Samat Galimov
  7. Sunsetting React Native — Gabriel Peal

Written by Gevorg Petrosian, translated from here

The post React Native – A Silver Bullet for All Problems? How We Choose a Cross-Platform Tool for Profi.ru appeared first on QueWorx.

]]>
Getting Started with Interactive Brokers API in Java https://www.queworx.com/blog/getting-started-with-interactive-brokers-api-in-java/ Sun, 05 Jan 2020 20:48:01 +0000 http://www.queworx.com/?p=2275 This tutorial will show you how to do some basic things with the Interactive Brokers API using Java, the code for everything in this tutorial can be found here. First download and install Trader Workstation from the interactive brokers site – here. Then grab the API from here. You are just looking for the TwsApi.jar …


]]>
This tutorial will show you how to do some basic things with the Interactive Brokers API using Java; the code for everything in this tutorial can be found here.

First, download and install Trader Workstation from the Interactive Brokers site – here.

Then grab the API from here. You are just looking for the TwsApi.jar from that package, so you can add it to your project.

You’ll also want to start TWS, go into Configuration -> API -> Settings, and check “Enable ActiveX and Socket Clients”. Take note of the socket port as well; you will need it later.

Broker

We’ll start by adding a broker class that wraps all the Interactive Brokers API code; this is how our application will call IB. Let’s begin with connect() and disconnect() functions, so your class should start like this:

(IBBroker.java)

import com.ib.client.EClientSocket;

public class IBBroker {
    private EClientSocket __clientSocket;
    private IBDatastore __ibDatastore;

    public IBBroker(EClientSocket clientSocket, IBDatastore ibDatastore) {
        __clientSocket = clientSocket;
        __ibDatastore = ibDatastore;
    }

    public void connect() {
        // ip_address, port, and client ID. Client ID is used to identify the app that connects to TWS, you can
        // have multiple apps connect to one TWS instance
        __clientSocket.eConnect("127.0.0.1", 7497, 1);
    }

    public void disconnect() {
        __clientSocket.eDisconnect();
    }
}

Getting Messages from Interactive Brokers

To get messages/data from Interactive Brokers we have to implement their EWrapper interface.

(IBReceiver.java)

import com.ib.client.*;

import java.util.Set;

public class IBReceiver implements EWrapper {
    private IBDatastore __ibDatastore;

    public IBReceiver(IBDatastore ibDatastore) {
        __ibDatastore = ibDatastore;
    }

    @Override
    public void tickPrice(int i, int i1, double v, int i2) {

    }

    .......
}

There are going to be lots of methods that we have to override, but technically we don’t have to fill out any of them, since they are all void. We definitely want to implement the error() functions, since we want to know when something goes wrong. Other than that, we will just implement functions as we need them.

Datastore

We also want to add a data store class that holds all the data coming back from IB, as well as the data we set for it. That way IBBroker and IBReceiver can share the same data, and you can pass this data store to any other class without it having to know about IBBroker or IBReceiver.

(IBDatastore.java)

import java.util.HashMap;

public class IBDatastore {

    public Integer nextValidId = null;

    private HashMap<Integer, Tick> __ticks = new HashMap<Integer, Tick>();

    public Tick getLatestTick(int symbolId) {
        return __ticks.get(symbolId);
    }

    public void putTick(int symbolId, Tick tick) {
        __ticks.put(symbolId, tick);
    }
}
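The Tick class itself isn’t shown in this tutorial. A minimal sketch, consistent with how it is used later in tickPrice() (tick.bid, tick.ask, tick.last, tick.modified_at), might look like this:

```java
// Hypothetical sketch of the Tick data class the datastore holds; the real
// tutorial code doesn't show it, so the fields here are inferred from usage.
class Tick {
    public double bid;
    public double ask;
    public double last;
    public long modified_at; // epoch millis of the last update
}
```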

Wiring it up

And finally we tie everything together so that everything is connected:

(Main.java)

package com.queworx;

import com.ib.client.EClientSocket;
import com.ib.client.EJavaSignal;
import com.ib.client.EReaderSignal;

public class Main {

    public static void main(String[] args) {
        // Signal processing with TWS, we will not be using it
        EReaderSignal readerSignal = new EJavaSignal();

        IBDatastore ibDatastore = new IBDatastore();

        IBReceiver ibReceiver = new IBReceiver(ibDatastore);
        EClientSocket clientSocket = new EClientSocket(ibReceiver, readerSignal);
        IBBroker ibBroker = new IBBroker(clientSocket, ibDatastore);

        try
        {
            ibBroker.connect();

            // Wait for nextValidId
            for (int i=0; i<10; i++) {
                if (ibDatastore.nextValidId != null)
                    break;

                Thread.sleep(1000);
            }

            if (ibDatastore.nextValidId == null)
                throw new Exception("Didn't get a valid id from IB");

            // From here you can add the logic of your application
        }
        catch(Exception ex)
        {
            System.err.println(ex);
        }
        finally
        {
            ibBroker.disconnect();
            System.exit(0);
        }
    }
}

Notice that before we issue any requests to IB, we wait for nextValidId to be set. We use that id when creating an order, but more generally it indicates that the connection has been established and TWS is ready to receive requests.
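For that wait loop to ever succeed, the receiver has to store the id that TWS sends through EWrapper’s nextValidId(int) callback. A standalone sketch of that handler, with the datastore stubbed out so the snippet compiles on its own:

```java
// The datastore is stubbed here so the sketch stands alone; in the tutorial
// this would be the real IBDatastore.
class DatastoreStub {
    public Integer nextValidId = null;
}

class NextValidIdHandler {
    private final DatastoreStub __ibDatastore;

    NextValidIdHandler(DatastoreStub ibDatastore) {
        __ibDatastore = ibDatastore;
    }

    // In IBReceiver this would be the @Override of EWrapper's nextValidId(int),
    // which TWS calls shortly after the connection is established.
    public void nextValidId(int orderId) {
        __ibDatastore.nextValidId = orderId;
    }
}
```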

Receiving Quotes

We will be using our broker to request quote information. We have to create a Contract and pass it to reqMktData. We also need to give unique int ids to our instruments; IB will give those ids back to us in the callbacks.

(IBBroker.java)

...
    public void subscribeQuoteData(int tickerId, String symbol, String exchange) {
        // full doc here - https://interactivebrokers.github.io/tws-api/classIBApi_1_1Contract.html
        Contract contract = new Contract(0, symbol, "STK", null, 0.0d, null,
                null, exchange, "USD", null, null, null,
                "SMART", false, null, null);

        // We are asking for additional shortable (236) and fundamental ratio (258) information.
        // The false says that we don't want a snapshot, we want a streaming feed of data.
        // https://interactivebrokers.github.io/tws-api/classIBApi_1_1EClient.html#a7a19258a3a2087c07c1c57b93f659b63
        __clientSocket.reqMktData(tickerId, contract, "236,258", false, null);
    }
...
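The tutorial leaves assigning those tickerIds to the caller. A hypothetical helper (not part of the IB API) that hands out a unique id per symbol and maps ids back to symbols inside the callbacks might look like:

```java
import java.util.HashMap;

// Hypothetical helper: assigns each symbol a unique int tickerId and lets
// callbacks map an id back to its symbol. Not part of the IB API.
class TickerIdRegistry {
    private final HashMap<String, Integer> __idsBySymbol = new HashMap<>();
    private final HashMap<Integer, String> __symbolsById = new HashMap<>();
    private int __nextId = 1;

    // Returns the existing id for a symbol, or assigns the next free one.
    public synchronized int idFor(String symbol) {
        Integer id = __idsBySymbol.get(symbol);
        if (id == null) {
            id = __nextId++;
            __idsBySymbol.put(symbol, id);
            __symbolsById.put(id, symbol);
        }
        return id;
    }

    public synchronized String symbolFor(int tickerId) {
        return __symbolsById.get(tickerId);
    }
}
```

You could then subscribe with something like subscribeQuoteData(registry.idFor("AAPL"), "AAPL", "NASDAQ") and use symbolFor(tickerId) inside the tick callbacks.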

To receive the information, we will need to fill out tickPrice(), tickSize(), and tickGeneric() in IBReceiver to get the extra info we requested. For example, to modify tickPrice():

(IBReceiver.java)

...
    @Override
    public void tickPrice(int tickerId, int field, double price, int canAutoExecute) {
        if (field != 1 && field != 2 && field != 4)
            return;

        Tick tick = __ibDatastore.getLatestTick(tickerId);

        // Guard against ids we haven't registered a Tick for
        if (tick == null)
            return;

        if (field == 1)
            tick.bid = price;
        else if (field == 2)
            tick.ask = price;
        else if (field == 4)
            tick.last = price;

        tick.modified_at = System.currentTimeMillis();
    }
...

The full list of field types is here: https://interactivebrokers.github.io/tws-api/tick_types.html
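The magic numbers in tickPrice() come from that table (1 = bid, 2 = ask, 4 = last). A small constants sketch, with the codes taken from the linked page, makes the callback easier to read:

```java
// Tick-type codes used in tickPrice(); values per the TWS API tick_types page.
class TickTypeCodes {
    static final int BID = 1;
    static final int ASK = 2;
    static final int LAST = 4;
}
```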

Placing Orders

Let’s modify our IBBroker to be able to place orders.

(IBBroker.java)

...
    private void createOrder(String symbol, String exchange, int quantity, double price, boolean buy)
    {
        // Contract creation moved out into its own method
        Contract contract = __createContract(symbol, exchange);

        int orderId = __ibDatastore.nextValidId;

        // https://interactivebrokers.github.io/tws-api/classIBApi_1_1Order.html
        Order order = new Order();
        order.clientId(__clientId);   // __clientId and __ibAccount are assumed to be fields set on IBBroker
        order.transmit(true);
        order.orderType("LMT");
        order.orderId(orderId);
        order.action(buy ? "BUY" : "SELL");
        order.totalQuantity(quantity);
        order.lmtPrice(price);
        order.account(__ibAccount);
        order.hidden(false);
        order.minQty(100);

        __clientSocket.placeOrder(orderId, contract, order);

        // We can either request the next valid orderId or just increment it
        __ibDatastore.nextValidId++;
    }
...

Then, on the receiver side, we will look at the order status. The orderStatus() callback fires when you submit the order and then any time anything changes; you might receive multiple messages for the same event.

(IBReceiver.java)

...
    @Override
    public void orderStatus(int orderId, String status, double filled, double remaining, double avgFillPrice, int permId, int parentId, double lastFillPrice, int clientId, String whyHeld) {
        /**
         * Here we can check on how our order did. If it partially filled, we might want to resubmit at a different price.
         * We might want to update our budget, so that we don't trade any more positions. Etc. All of this is a bit
         * beyond the scope of this tutorial.
         */
    }
...
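Since TWS can deliver the same order status more than once, a hypothetical deduplication helper (not part of the IB API) could remember the last (status, filled) pair per orderId and let the handler skip exact repeats:

```java
import java.util.HashMap;

// Hypothetical helper: remembers the last (status, filled) pair seen for each
// orderId so the orderStatus() handler can ignore exact duplicate messages.
class OrderStatusFilter {
    private final HashMap<Integer, String> __lastSeen = new HashMap<>();

    // Returns true only the first time this exact update is seen for orderId.
    public synchronized boolean isNew(int orderId, String status, double filled) {
        String key = status + ":" + filled;
        String prev = __lastSeen.put(orderId, key);
        return !key.equals(prev);
    }
}
```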

Conclusion

The API itself is incredibly complicated, just as the TWS app is. You can trade various instruments – stocks, bonds, options, futures, etc. And there are all sorts of orders with all sorts of options. But this tutorial will hopefully get you started so that you can at least get something basic going and then add complexity to it as needed.

This tutorial’s code is on Github. If you need something more advanced, check out the full IB trader that I wrote a long time ago using the Groovy language.


Written by Eddie Svirsky

The post Getting Started with Interactive Brokers API in Java appeared first on QueWorx.

]]>
When and how to use outstaffing services? https://www.queworx.com/blog/outstaffing-services/ Fri, 03 Jan 2020 00:03:11 +0000 http://www.queworx.com/?p=2278 There are several different working models for hiring people to work on your projects. In short, outstaffing is hiring someone from another company to work for you. Not to be confused with outsourcing, which is hiring another company to do some work for you. Most frequently when people talk about outstaffing they are referring to …


]]>
There are several different working models for hiring people to work on your projects. In short, outstaffing is hiring someone from another company to work for you. Not to be confused with outsourcing, which is hiring another company to do some work for you. Most frequently when people talk about outstaffing they are referring to software development, and that’s what this article will discuss.

When to use outstaffing?

When you have a project and need some software development done, you have a few options. You can hire employees, hire contractors, find a company that will do the project for you (outsource), or hire developers from another company to work for you (outstaff). These are just different models for hiring people to work on your software, each one with its own strengths and weaknesses, and you should use the appropriate one for your specific scenario.

Outsourcing is only really suitable when you have a well-defined project to begin with, which is most often not the case. If you are building long term and your requirements are constantly changing, you want to control development. An ideal scenario for outsourcing, for example, would be adding an AI module to your current project. It’s a well-defined project that you wouldn’t have the expertise in-house to do, so you would set clear requirements and pass it off to a company that specializes in AI. They would then deliver a single self-contained package and that specific engagement would be over.

If your use case doesn’t fit the outsourcing model, then you have to consider hiring employees or contractors. Employees are permanent placements in your company. If you have an ongoing project, it makes sense for you to hire some employees to control development and keep knowledge in-house. Contractors make sense when you are looking for a temporary engagement. For example, let’s say you have a tight deadline and you need more resources to shore up your team. Or you might want an expert in some technology to come in, set it up, get the rest of your team up to speed on how to use it, and then leave.

Outstaffing and hiring contractors are very similar. The only real difference is that you are either engaging contractors directly or going through an agency to engage them for you. The main benefit of going through an agency is that you don’t have to spend time doing recruitment, which is very time consuming. The agency, theoretically, already does recruitment full time and is good at screening candidates. They also have a large pool of proven candidates to call on. That also means that the agency will give you more flexibility to scale up or down than if you did it yourself.

Offshore outstaffing

You can outstaff with either local or offshore staff. The big benefit of offshore staff is the massive reduction in costs. For the price of one employee, you can get two employees and still maintain the same level of quality. Your trade-off is going to be language barriers and time zone issues.

A good model is hiring a combination of local and offshore resources to minimize the downsides, while still maintaining knowledge in-house and reducing costs. For example, a local team lead who can communicate with and manage the remote team. This is becoming a great model now that our remote tools (like Slack) are getting so much better.

How to hire outstaffing services?

In general, choosing a good company is as essential as choosing a good developer. A bad outstaffing company will just try to fill bodies, and the quality of candidates that you will be getting will be sub-par.

The best way to find a good company, as always, is word of mouth. If someone you know is happy with a company, that’s a good indicator that the company takes its job seriously and can be trusted.

The next best way is user reviews, although these are not always reliable. In the US, these companies are known as “staffing agencies”. If you go to clutch.co you can see a large list of local and offshore staffing companies, with reviews. beststaffingagencies.com also has a large list of staffing agency reviews and scores.

If you are looking for offshore staff, you can also go on Upwork and engage with one of the outsourcing agencies there, they are all willing to outstaff as well as outsource and all have lots of reviews. Make sure to engage with a reputable company, and not just the lowest cost provider. The decision to do offshore or local staff is another big topic that will have to be discussed in a different article.


Written by Eddie Svirsky

The post When and how to use outstaffing services? appeared first on QueWorx.

]]>