Parquet file format

Parquet is an open source file format available to any project in the Hadoop ecosystem. Apache Parquet is designed as an efficient, performant, flat columnar storage format, in contrast to row-based formats such as CSV or TSV.

Parquet uses the record shredding and assembly algorithm, which is superior to the simple flattening of nested namespaces. Parquet is optimized to work with complex data in bulk and offers several efficient data compression and encoding types. This approach is best for queries that need to read only certain columns from a large table: Parquet can read just the needed columns, greatly minimizing I/O.
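
As a minimal sketch of that column pruning, the following Python snippet uses the PyArrow library; the file name events.parquet and its column names are assumptions made purely for illustration.

```python
import pyarrow.parquet as pq

# Read only the two columns the query needs; Parquet's columnar layout
# lets the reader skip every other column chunk on disk, minimizing I/O.
# "events.parquet", "user_id", and "event_time" are hypothetical names.
table = pq.read_table("events.parquet", columns=["user_id", "event_time"])
print(table.num_rows, table.schema)
```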

Advantages of Storing Data in a Columnar Format:

  • Columnar storage like Apache Parquet is designed to bring efficiency compared to row-based files like CSV. When querying columnar storage, you can skip over the irrelevant data very quickly. As a result, aggregation queries are less time consuming than in row-oriented databases. This way of storing data translates into hardware savings and minimized latency for accessing data.
  • Apache Parquet was built from the ground up with complex nested data structures in mind, so it can support them efficiently. The layout of Parquet data files is optimized for queries that process large volumes of data, in the gigabyte range for each individual file.
  • Parquet is built to support flexible compression options and efficient encoding schemes. Since the data within a column tends to share the same type, compressing each column is straightforward (which makes queries even faster). Data can be compressed with one of several available codecs; as a result, different data files can be compressed differently (see the sketch after this list).
  • Apache Parquet works best with interactive and serverless technologies such as Amazon Athena, Amazon Redshift Spectrum, Google BigQuery and Google Dataproc.
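
To illustrate the compression point above, here is a small sketch using pandas (which writes Parquet via PyArrow by default); the DataFrame contents and file names are invented for illustration.

```python
import pandas as pd

# A synthetic, highly repetitive column compresses very well.
df = pd.DataFrame({"id": range(1_000_000), "value": ["x", "y"] * 500_000})

# Each call produces a valid Parquet file; readers detect the codec from
# the file metadata, so differently compressed files interoperate freely.
df.to_parquet("data_snappy.parquet", compression="snappy")  # fast, the default
df.to_parquet("data_gzip.parquet", compression="gzip")      # smaller, slower
df.to_parquet("data_zstd.parquet", compression="zstd")      # good balance
```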

Difference Between Parquet and CSV

CSV is a simple and widespread format used by many tools; Excel, Google Sheets, and numerous others can generate CSV files. Even though CSV files are the default format for many data processing pipelines, they have some disadvantages, especially around cost:

  • Amazon Athena and Spectrum will charge based on the amount of data scanned per query.
  • Google and Amazon will charge you according to the amount of data stored on GCS/S3.
  • Google Dataproc charges are time-based.

Parquet has helped its users reduce storage requirements by at least one-third on large datasets; in addition, it greatly improves scan and deserialization times, and hence the overall costs.
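
As a rough illustration of the storage difference, the sketch below writes the same synthetic DataFrame as both CSV and Parquet and compares the on-disk sizes; actual savings depend on the data and the codec.

```python
import os
import pandas as pd

# A stand-in dataset; real-world savings vary with the data's shape.
df = pd.DataFrame({"id": range(500_000), "category": ["a", "b", "c", "d"] * 125_000})

df.to_csv("data.csv", index=False)
df.to_parquet("data.parquet")  # snappy compression by default

print("CSV bytes:    ", os.path.getsize("data.csv"))
print("Parquet bytes:", os.path.getsize("data.parquet"))
```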
