00. Converting list, array, and numpy data to binary bytes Python

Sometimes you want to store a numpy array or a list as binary data, or convert binary data back into a numpy array or list. The snippets below cover both directions.

Format character references:
https://docs.python.org/3/library/struct.html
https://docs.python.org/3/library/array.html
https://docs.scipy.org/doc/numpy-1.13.0/user/basics.types.html

Convert int to binary

>>> import struct
>>> struct.pack('i', 1)
b'\x01\x00\x00\x00'
>>> struct.pack('ii', 77, 88)
b'M\x00\x00\x00X\x00\x00\x00'
Convert binary to int

>>> struct.unpack('i', b'\x01\x00\x00\x00')
(1,)
>>> struct.unpack('ii', b'M\x00\x00\x00X\x00\x00\x00')
(77, 88)
Convert a float list to binary

>>> from array import array
>>> x = array('f', [3.141592, 7.2])
>>> bytes(x)
b'\xd8\x0fI@ff\xe6@'
Convert binary to a float list

# [3.141592, 7.2]
>>> data = b'\xd8\x0fI@ff\xe6@'
>>> list(array('f', data))
[3.141592025756836, 7.199999809265137]
Convert binary to a numpy float array

# [3.141592, 7.2]
>>> import numpy as np
>>> data = b'\xd8\x0fI@ff\xe6@'
>>> np.frombuffer(data, dtype=np.float32)
array([ 3.14159203,  7.19999981], dtype=float32)
Convert a numpy float array to binary

>>> np.array([ 3.14159203,  7.19999981], dtype=np.float32).tobytes()
b'\xd8\x0fI@ff\xe6@'
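
Since the usual goal is persisting the data, here is a minimal round-trip sketch that writes the bytes to a file and reads them back (the file name is just an example):

import numpy as np

data = np.array([3.141592, 7.2], dtype=np.float32).tobytes()
with open('values.bin', 'wb') as f:
    f.write(data)
with open('values.bin', 'rb') as f:
    restored = np.frombuffer(f.read(), dtype=np.float32)
print(restored)  # [3.141592 7.2     ], up to float32 rounding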

Eclipse "exit code=1" error on launch Ubuntu / Linux



This happens when a Java version compatible with your Eclipse version cannot be found.

In other words, either Java is not installed, or several Java versions are installed and the default one does not match what Eclipse needs.

Running the following lets you choose the default Java version:

    sudo update-alternatives --config java

Changing the system-wide default Java works, but when possible it is better to edit the ini file so that a specific Java version is used only when launching Eclipse.

Open eclipse.ini and add -vm on the line above -vmargs.

Then put the path to the Java executable on the following line:

-startup
plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.300.v20150602-1417
-product
org.eclipse.epp.package.cpp.product
--launcher.defaultAction
openFile
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
256m
--launcher.defaultAction
openFile
--launcher.appendVmargs

-vm
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java

-vmargs
-Dosgi.requiredJavaVersion=1.7
-XX:MaxPermSize=256m
-Xms256m
-Xmx1024m
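
If you are not sure which JVM paths exist on your machine, the installed JVMs normally live under /usr/lib/jvm, and listing that directory shows the candidate paths for the -vm entry:

    ls /usr/lib/jvm/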

Getting the best performance out of NumPy Python

[Original article]: http://ipython-books.github.io/featured-01/


NumPy is the cornerstone of the scientific Python software stack. It provides a special data type optimized for vector computations, the ndarray. This object is at the core of most algorithms in scientific numerical computing.

With NumPy arrays, you can achieve significant performance speedups over native Python, particularly when your computations follow the Single Instruction, Multiple Data (SIMD) paradigm. However, it is also possible to unintentionally write non-optimized code with NumPy.

In this featured recipe, we will see some tricks that can help you write optimized NumPy code. We will start by looking at ways to avoid unnecessary array copies in order to save time and memory. In that respect, we will need to dig into the internals of NumPy.

Learning to avoid unnecessary array copies

Computations with NumPy arrays may involve internal copies between blocks of memory. These copies are not always necessary, in which case they should be avoided. Here are a few tips that can help you optimize your code accordingly.

import numpy as np

Inspect the memory address of arrays

  1. The first step when looking for silent array copies is to find out the location of arrays in memory. The following function does just that:

def id(x):
    # This function returns the memory
    # block address of an array.
    return x.__array_interface__['data'][0]
  2. You may sometimes need to make a copy of an array; for instance, if you need to manipulate an array while keeping an original copy in memory.
a = np.zeros(10); aid = id(a); aid
71211328
b = a.copy(); id(b) == aid
False

Two arrays with the same data location (as returned by id) share the underlying data buffer. However, the opposite is only true if the arrays have the same offset (meaning that they have the same first element). Two shared arrays with different offsets will have slightly different memory locations, as shown in the following example:

id(a), id(a[1:])
(71211328, 71211336)

In this recipe, we'll make sure to use this method with arrays that have the same offset. Here is a more reliable solution for finding out if two arrays share the same data:

def get_data_base(arr):
    """For a given NumPy array, find the
    base array that "owns" the actual data."""
    base = arr
    while isinstance(base.base, np.ndarray):
        base = base.base
    return base

def arrays_share_data(x, y):
    return get_data_base(x) is get_data_base(y)
print(arrays_share_data(a, a.copy()), arrays_share_data(a, a[1:]))
False True

Thanks to Michael Droettboom for pointing this out and proposing this alternative solution.

In-place and implicit copy operations

  3. Array computations can involve in-place operations (first example below: the array is modified) or implicit-copy operations (second example: a new array is created).
a *= 2; id(a) == aid
True
c = a * 2; id(c) == aid
False

Be sure to choose the type of operation you actually need. Implicit-copy operations are significantly slower, as shown here:

%%timeit a = np.zeros(10000000)
a *= 2
10 loops, best of 3: 19.2 ms per loop
%%timeit a = np.zeros(10000000)
b = a * 2
10 loops, best of 3: 42.6 ms per loop
  4. Reshaping an array may or may not involve a copy. The reasons will be explained below. For instance, reshaping a 2D matrix does not involve a copy, unless it is transposed (or, more generally, non-contiguous):
a = np.zeros((10, 10)); aid = id(a); aid
53423728

Reshaping an array while preserving its order does not trigger a copy.

b = a.reshape((1, -1)); id(b) == aid
True

Transposing an array changes its order so that a reshape triggers a copy.

c = a.T.reshape((1, -1)); id(c) == aid
False

Therefore, the latter instruction will be significantly slower than the former.

  5. The flatten and ravel methods of an array reshape it into a 1D vector (a flattened array). flatten always returns a copy, whereas ravel returns a copy only if necessary (so it's significantly faster too, especially with large arrays).
d = a.flatten(); id(d) == aid
False
e = a.ravel(); id(e) == aid
True
%timeit a.flatten()
1000000 loops, best of 3: 881 ns per loop
%timeit a.ravel()
1000000 loops, best of 3: 294 ns per loop

Broadcasting rules

  6. Broadcasting rules allow you to make computations on arrays with different but compatible shapes. In other words, you don't always need to reshape or tile your arrays to make their shapes match. The following example illustrates two ways of computing the outer product of two vectors: the first method involves array tiling, the second involves broadcasting. The last method is significantly faster.
n = 1000
a = np.arange(n)
ac = a[:, np.newaxis]
ar = a[np.newaxis, :]
%timeit np.tile(ac, (1, n)) * np.tile(ar, (n, 1))
100 loops, best of 3: 10 ms per loop
%timeit ar * ac
100 loops, best of 3: 2.36 ms per loop

Making efficient selections in arrays with NumPy

NumPy offers multiple ways of selecting slices of arrays. Array views refer to the original data buffer of an array, but with different offsets, shapes and strides. They only permit strided selections (i.e. with linearly spaced indices). NumPy also offers specific functions to make arbitrary selections along one axis. Finally, fancy indexing is the most general selection method, but it is also the slowest as we will see in this recipe. Faster alternatives should be chosen when possible.

  7. Let's create an array with a large number of rows. We will select slices of this array along the first dimension.
n, d = 100000, 100
a = np.random.random_sample((n, d)); aid = id(a)

Array views and fancy indexing

  8. Let's select one in every ten rows, using two different methods (an array view and fancy indexing).
b1 = a[::10]
b2 = a[np.arange(0, n, 10)]
np.array_equal(b1, b2)
True
  9. The view refers to the original data buffer, whereas fancy indexing yields a copy.
id(b1) == aid, id(b2) == aid
(True, False)
  10. Let's compare the performance of both methods.
%timeit a[::10]
1000000 loops, best of 3: 804 ns per loop
%timeit a[np.arange(0, n, 10)]
100 loops, best of 3: 14.1 ms per loop

Fancy indexing is several orders of magnitude slower as it involves copying a large array.

Alternatives to fancy indexing: list of indices

  11. When non-strided selections need to be done along one dimension, array views are not an option. However, alternatives to fancy indexing still exist in this case. Given a list of indices, NumPy's take function performs a selection along one axis.
i = np.arange(0, n, 10)
b1 = a[i]
b2 = np.take(a, i, axis=0)
np.array_equal(b1, b2)
True

The second method is faster:

%timeit a[i]
100 loops, best of 3: 13 ms per loop
%timeit np.take(a, i, axis=0)
100 loops, best of 3: 4.87 ms per loop

Alternatives to fancy indexing: mask of booleans

  12. When the indices to select along one axis are specified by a vector of booleans (a mask), the compress function is an alternative to fancy indexing.
i = np.random.random_sample(n) < .5

The selection can be made using fancy indexing or the np.compress function.

b1 = a[i]
b2 = np.compress(i, a, axis=0)
np.array_equal(b1, b2)
True
%timeit a[i]
10 loops, best of 3: 59.8 ms per loop
%timeit np.compress(i, a, axis=0)
10 loops, best of 3: 24.1 ms per loop

The second method is also significantly faster than fancy indexing.

Fancy indexing is the most general way of making completely arbitrary selections of an array. However, more specific and faster methods often exist and should be preferred when possible.

Array views should be used whenever strided selections have to be done, but one needs to be careful about the fact that views refer to the original data buffer.

How does it work?

In this section, we will see what happens under the hood when using NumPy, and how this knowledge allows us to understand the tricks given in this recipe.

Why are NumPy arrays efficient?

A NumPy array is basically described by metadata (number of dimensions, shape, data type, and so on) and the actual data. The data is stored in a homogeneous and contiguous block of memory, at a particular address in system memory (Random Access Memory, or RAM). This block of memory is called the data buffer. This is the main difference with a pure Python structure, like a list, where the items are scattered across the system memory. This aspect is the critical feature that makes NumPy arrays so efficient.

Why is this so important? Here are the main reasons:

  1. Array computations can be written very efficiently in a low-level language like C (and a large part of NumPy is actually written in C). Knowing the address of the memory block and the data type, it is just simple arithmetic to loop over all items, for example. There would be a significant overhead to do that in Python with a list.

  2. Spatial locality in memory access patterns results in significant performance gains, notably thanks to the CPU cache. Indeed, the cache loads bytes in chunks from RAM to the CPU registers. Adjacent items are then loaded very efficiently (sequential locality, or locality of reference).

  3. Data elements are stored contiguously in memory, so NumPy can take advantage of vectorized instructions on modern CPUs, like Intel's SSE and AVX, AMD's XOP, and so on. For example, multiple consecutive floating point numbers can be loaded into 128-, 256-, or 512-bit registers for vectorized arithmetic computations implemented as CPU instructions.

Additionally, let's mention the fact that NumPy can be linked to highly optimized linear algebra libraries like BLAS and LAPACK, for example through the Intel Math Kernel Library (MKL). A few specific matrix computations may also be multithreaded, taking advantage of the power of modern multicore processors.

In conclusion, storing data in a contiguous block of memory ensures that the architecture of modern CPUs is used optimally, in terms of memory access patterns, CPU cache, and vectorized instructions.

What is the difference between in-place and implicit-copy operations?

Let's explain trick 3. An expression like a *= 2 corresponds to an in-place operation, where all values of the array are multiplied by two. By contrast, a = a * 2 means that a new array containing the values of a * 2 is created, and the variable a now points to this new array. The old array becomes unreferenced and will be deleted by the garbage collector. No memory allocation happens in the first case, contrary to the second case.

More generally, expressions like a[i:j] are views to parts of an array: they point to the memory buffer containing the data. Modifying them with in-place operations changes the original array. Hence, a[:] = a * 2 results in an in-place operation, unlike a = a * 2.

Knowing this subtlety of NumPy can help you fix some bugs (where an array is implicitly and unintentionally modified because of an operation on a view), and optimize the speed and memory consumption of your code by reducing the number of unnecessary copies.
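
As a quick illustration of that subtlety, here is a minimal sketch (variable names are just examples): writing through a view changes the original array in place, while rebinding the name does not.

import numpy as np

a = np.zeros(4)
v = a[1:3]        # a view into a's buffer
v[:] = 7          # in-place write through the view
print(a)          # [0. 7. 7. 0.] -- the original array was modified

b = a[1:3]
b = b * 2         # rebinds b to a brand new array; a is untouched
print(a)          # still [0. 7. 7. 0.]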

Why can't some arrays be reshaped without a copy?

We explain here trick 4, where a transposed 2D matrix cannot be flattened without a copy. A 2D matrix contains items indexed by two numbers (row and column), but it is stored internally as a 1D contiguous block of memory, accessible with a single number. There is more than one way of storing matrix items in a 1D block of memory: we can store the elements of the first row first, then the second row, and so on, or the elements of the first column first, then the second column, and so on. The first method is called row-major order, whereas the second is called column-major order. Choosing between the two is only a matter of internal convention: NumPy uses row-major order, like C, but unlike FORTRAN.

Array layout

More generally, NumPy uses the notion of strides to convert between a multidimensional index and the memory location of the underlying (1D) sequence of elements. The specific mapping between array[i1, i2] and the relevant byte address of the internal data is given by

offset = array.strides[0] * i1 + array.strides[1] * i2

When reshaping an array, NumPy avoids copies when possible by modifying the strides attribute. For example, when transposing a matrix, the order of strides is reversed, but the underlying data remains identical. However, flattening a transposed array cannot be accomplished simply by modifying strides (try it!), so a copy is needed (thanks to Chris Beaumont from Harvard for clarifying an earlier version of this paragraph).
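
A quick way to see this in action (a minimal sketch) is to inspect the strides attribute directly; transposing swaps the strides but leaves the data untouched:

import numpy as np

a = np.zeros((10, 10))             # float64: 8 bytes per element
print(a.strides)                   # (80, 8): one row is 80 bytes
print(a.T.strides)                 # (8, 80): transposition only swaps strides
print(a.T.flags['C_CONTIGUOUS'])   # False: flattening a.T therefore needs a copy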

Recipe 4.6 (Using stride tricks with NumPy) contains a more extensive discussion on strides. Also, recipe 4.7 (Implementing an efficient rolling average algorithm with stride tricks) shows how one can use strides to accelerate particular array computations.

Internal array layout can also explain some unexpected performance discrepancies between very similar NumPy operations. As a small exercise, can you explain the following benchmarks?

a = np.random.rand(5000, 5000)
%timeit a[0,:].sum()
%timeit a[:,0].sum()
100000 loops, best of 3: 9.57 µs per loop
10000 loops, best of 3: 68.3 µs per loop

What are NumPy broadcasting rules?

Broadcasting rules describe how arrays with different dimensions and/or shapes can still be used for computations. The general rule is that two dimensions are compatible when they are equal, or when one of them is 1. NumPy uses this rule to compare the shapes of the two arrays element-wise, starting with the trailing dimensions and working its way forward. The dimension of size 1 is internally stretched to match the other dimension, but this operation does not involve any memory copy.
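
For instance (a small sketch), a (3, 1) column and a length-4 row broadcast to a (3, 4) result without tiling either input:

import numpy as np

col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4)                 # shape (4,), treated as (1, 4)
out = col * row                    # outer product via broadcasting
print(out.shape)                   # (3, 4), with no copies of the inputs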

References

Here are a few references:

You will find related recipes on the book's repository.

You'll find the rest of the chapter in the full version of the IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014.


Performance test of ways to copy a 4x4 matrix with numpy Python

There are several ways to copy an array with numpy in Python. I suddenly got curious about their performance, so I ran a few tests. Surprisingly, the methods differ quite a bit.


Performance test of ways to copy a 4x4 matrix in numpy (100,000 runs per case)


a = np.eye(4)  # 4x4 matrix

b = np.eye(4)


# Not tested: this is not a value copy, it only rebinds the name

a = b


# Caution: these two methods are not copies by value. The numpy objects themselves differ, but they share the same underlying elements; changing an element of a changes b as well.

a = b[...] 0.0281984806060791

a = b[:] 0.03184771537780762


# The methods below actually copy the values, sorted from fastest to slowest

a[...] = b 0.06301045417785645

a[...] = b[...] 0.08553266525268555

a = b.copy() 0.08831429481506348

a[:] = b 0.08985710144042969

a[:] = b[:] 0.11564087867736816

a[...] = b.copy() 0.14728522300720215
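
The post doesn't show the timing harness itself, so here is a minimal sketch of how such numbers could be produced (100,000 runs per case, matching the description above; the timeit module would be the more robust choice):

import time
import numpy as np

a = np.eye(4)
b = np.eye(4)

start = time.time()
for _ in range(100000):
    a[...] = b            # swap in each copy variant here to compare
print(time.time() - start)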


Installing Unity3D on Ubuntu Unity

https://www.linuxhint.com/install-unity3d-linux/

How to install Unity, a flexible and powerful development platform for creating multiplatform 3D and 2D games as well as interactive experiences, on Linux. With Unity you can target more devices easily: with a single click you can deploy your game to mobile, VR, desktop, web, console, and TV platforms.

Furthermore, it’s a complete ecosystem for anyone who aims to build a business on creating high-end content and connecting to their most loyal and enthusiastic players and customers.

Before we proceed with how to install Unity, let's look at some of the supported platforms, as well as the updates in this release.

Supported Target Platforms For Linux

The Unity Editor for Linux supports export to the following platforms:

  • Linux, Windows as well as Mac Standalone
  • Android, WebGL, Tizen as well as SamsungTV
  • Legacy WebPlayer
  • iOS project deployment (experimental in 5.5 builds)

Note that your desktop machine needs a modern graphics card with vendor-supported graphics drivers (provided by NVIDIA, AMD, or Intel) for Unity to run on Linux.


Unity 5.5.1 Update Changelog

Improvements

  • Graphics: Added support for feature level 11.1 on D3D11/D3D12. This brings native support for RGB565 as well as ARGB1555 RenderTexture formats. Note that this does not render correctly for ARGB4444 which will be fixed in one of the future releases.
  • Graphics: An error message is shown in the console for platforms that don’t support linear color space rendering with OpenGL ES
  • macOS/iOS/tvOS: Allow using Xcode’s manual signing workflow by specifying a provisioning profile in Player Settings.
  • Metal: Improved handling of transparent rendering after post-opaque image effects when using MSAA.
  • Shaders: If an unknown/unhandled error occurs during shader compilation, append it to the shader compiler error message. This gives some context on what might be wrong in the shader.
  • Shaders: Optimized in-editor import, load time as well as memory usage for shaders with massive amounts of potential variants.
  • Unity Ads: Updated native binaries to version 2.0.6.

Changes

  • Test Runner: Removed script templates for test runner (as it is not released)
  • Editor: A fix for an editor crash when switching platforms during a command-line build, shipped in 5.5.0p4, has been backed out because it requires further testing

See release notes for complete details

How to install Unity 5.5.1f1 build update on Ubuntu 17.04, Ubuntu 16.10, Ubuntu 16.04, Ubuntu 15.04, Ubuntu 14.04

sudo apt-get install gdebi
wget http://beta.unity3d.com/download/f5287bef00ff/unity-editor_amd64-5.5.1xf1Linux.deb
sudo gdebi unity-editor_amd64-5.5.1xf1Linux.deb

How to remove Unity from Ubuntu

sudo apt-get remove unity-editor


DirectX vs OpenGL differences OpenGL / Vulkan / DirectX

https://www.gamedev.net/articles/programming/graphics/perspective-projections-in-lh-and-rh-systems-r3598/

I have been writing a DirectX / OpenGL rendering engine recently. As you may know, DirectX is by default associated with a left-handed coordinate system (LH) and OpenGL with a right-handed system (RH). You can look at those two systems in another way: if you want to look in a positive direction, in LH you have Y as the up axis, while in RH you have Z as the up axis. Today, in the era of shaders, you can use either convention in both APIs, but you need to take care of a few things.

I have calculated both versions of the matrices for both systems. I am tired of remembering everything and/or recalculating it all over again, so I created this document, where I summarize the needed combinations and some tips & tricks. This is not meant to be a tutorial on "how projection works" or "where those values come from". It is for people who are tired of looking up how to convert one system or one API to another, or for those who don't care "why" but are happy to copy & paste the equations (however, don't blame me if something is wrong).

The RH system has become something of a standard in computer graphics. However, for my personal purposes, the LH system seems more logical to visualize. In my engine I wanted to leave the decision to the user, so in the end my system supports both orientations.

If we look more closely at DirectX and OpenGL, we can see one important difference in the projection. Regardless of whether we use an LH or RH system, DirectX maps depth to the interval [0, 1] while OpenGL maps it to [-1, 1]. What does that mean? The near clipping plane of a camera is always mapped to 0 in DirectX, but in OpenGL it is more complicated: in an LH system near maps to 1, but in RH it becomes -1 (see figures 5 and 6 in a later section). Of course, we can use the DirectX mapping in OpenGL (not the other way around), but in that case we throw away half of the depth buffer precision. We will discuss this more closely in the following sections. Personally, I think whoever invented the OpenGL depth coordinates must have had a twisted sense of humour; DirectX's solution is far better and easier to understand.
The matrix order used in this article is row-major. All operations are done in the order vector * matrix, as in (1), with indexing as in (2):

(1)  v' = v * M
(2)  M[row][column]

For column-major matrices, the order of operations is reversed, matrix * vector, as in (3); you also need to transpose the matrix elements accordingly:

(3)  v' = M * v

In the days of the fixed-function pipeline this was more problematic than it is today. In the era of shaders we can use whatever system and layout we want, and just change the order of operations or read values from different positions in the matrices.

World to View transformation

In every transformation pipeline, we first need to transform geometry from world coordinates into view (camera) space. After that, you can apply the projection transformation. The view matrix must use the same system as your final projection, so it must be LH or RH. This section is mentioned only for completeness, so you know how to transform a point; there will be no additional details on the view transformation.

The view matrix has the same layout in both systems (4):

(4)  [Equation 4: view-matrix layout, image missing from the source]

The differences are in the basis vectors and in how the last-row elements are calculated, as shown in Table 1:

         LH                  RH
look     |wLook - eye|       |eye - wLook|
right    |wUp x look|        |wUp x look|
up       |look x right|      |look x right|
A        -dot(right, eye)    dot(right, eye)
B        -dot(up, eye)       dot(up, eye)
C        -dot(look, eye)     dot(look, eye)

Table 1: View vector calculation. wLook is the camera look-at target, eye is the camera position, and wUp is the camera up vector, usually [0, 1, 0]. "x" stands for the cross product.

Perspective projection

For "3D world" rendering, you will probably use a perspective projection. Most of the time (like in 90% of cases) you will need a simplified perspective matrix (with a symmetric viewing volume). Pattern for such a projection matrix can be seen at 5. As you can see, this pattern is symmetric. For column and row major matrices, this simplified pattern will be the same, but values of D and E will be transposed. Be aware of this, it can cause some headaches if you do it the other way and not notice it. matrix_eq5.png (5) Now, how projection works. We have an input data in the view space coordinates. From those we need to map them into our screen. Since our screen is 2D (even if we have so called 3D display), we need to map a point to our screen. We take a simple example: matrix_eq_x.png matrix_eq6.png(6) matrix_eq7.png (7) where x,y,z,w is an input point ( w is a homogenous coordinate, if we want to "debug" on a paper, the best way is to choose this value as 1.0). Division by ( D . z ) is performed automatically after vertex shader stage. From equations 6 we have coordinates of a point on 2D screen. You may see, that those values are not coordinates of pixel (like [756, 653]), but they are in a range [-1, 1] for both axis (in DirectX and also in OpenGL). From equation 7 we have depth of pixel in range [0, 1] for DirectX and [-1, 1] for OpenGL. This value is used in depth buffer for closer / distant object recognition. Later on, we show how depth values look like. Those +1 / -1 values, that you will obtain after projection, are known as a normalized device coordinates (NDC). They form a cube, where X and Y axis are in interval [-1, 1] for DirectX and OpenGL. Z axis is more tricky. For DirectX, you have an interval [0, 1] and for OpenGL [-1, 1] (see 2). As you can see now, NDC is a LH system, doesn't matter what input system you have chosen. Everything, that is inside of this cube, is visible on our screen. Screen is taken as a cube face at Z = 0 (DirectX), Z = 1 (OpenGL LH) or Z = -1 (OpenGL RH). What you see on your screen is basically content of a NDC cube pressed to single plane.
Figure 2: OpenGL (left) and DirectX (right) NDC
We summarize the computations for the LH / RH systems and for DirectX and OpenGL in two different tables. The values differ between LH and RH and, of course, between APIs; the differences are spotted in the following sections. If you are interested in where those values come from, look elsewhere (OpenGL matrices, for example, are explained in a link in the source article); there are plenty of resources and it would be pointless to go over it again here.

DirectX

[Table 2, image missing from the source: projection matrix calculation for DirectX. Input parameters: fovY - field of view in the Y direction, AR - aspect ratio of the screen, n - Z value of the near clipping plane, f - Z value of the far clipping plane.]

Changing only the values in the projection matrix won't work as expected. If we render the same scene with the same DirectX device settings, we end up with incorrectly rendered scene geometry (inverted depth ordering) for one of those matrices. This is caused by the depth comparison in the depth buffer. Changing these settings takes a little longer in DirectX than in OpenGL: you need to call the functions in code snippet 1 with the values from Table 3.

deviceContext->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
...
depthStencilDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
device->CreateDepthStencilState(&depthStencilDesc, &depthStencilState);
deviceContext->OMSetDepthStencilState(depthStencilState, 1);

Code 1: Code snippet settings for LH DirectX rendering

                             LH                            RH
D3D11_CLEAR_DEPTH            1.0                           0.0
depthStencilDesc.DepthFunc   D3D11_COMPARISON_LESS_EQUAL   D3D11_COMPARISON_GREATER_EQUAL

Table 3: DirectX settings for both systems

OpenGL

[Table 4, image missing from the source: projection matrix calculation for OpenGL, with the same input parameters as Table 2.]

Again, changing only the values in the projection matrix won't work as expected. If we render the same scene with the same OpenGL device settings, we end up with incorrectly rendered scene geometry for one of those matrices; this is again caused by the depth comparison in the depth buffer. We need to change two things, as shown in Table 5:

LH                       RH
glClearDepth(0)          glClearDepth(1)
glDepthFunc(GL_GEQUAL)   glDepthFunc(GL_LEQUAL)

Table 5: OpenGL settings for both systems

Conclusion: if you set the comparison function and depth buffer clear value incorrectly, most of the time you will end up with a result like figure 3. The correct scene should look like figure 4.
Figure 3: Incorrectly set depth function and clear value for the current projection
Figure 4: Correctly set depth function and clear value for the current projection
Using the projection equations above, we can calculate the projected depth for any input value. If we do this for values in the interval [near, far], we get the following results (see figures 5 and 6). Notice the x-axis of the second graph: for the RH system, we need to change the sign of near to -near in order to obtain the same results as in the LH system. In plain language this means that in LH we look in the positive Z direction and in RH we look in the negative Z direction; in both cases, the viewer is located at the origin.
Figure 5: Projected depth with DirectX and OpenGL LH matrices (values used: near = 0.1, far = 1.0)
Figure 6: Projected depth with DirectX and OpenGL RH matrices (values used: near = -0.1, far = -1.0)
From the graphs above, we can see that distances near the camera get good precision in the depth buffer, while larger distances get limited precision. That is not always desirable. One possible solution is to keep your near and far distances as close together as possible: there will be fewer problems with the interval [0.1, 10] than with [0.1, 100]. This is not always possible when we want to render large 3D world environments, but the issue can be solved, as we show in the next section.
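
To make the non-linearity concrete, here is a small Python/numpy sketch of the depth mapping. The C and E terms below are the standard DirectX LH values; that specific choice is my assumption, since the original Table 2 image is missing:

import numpy as np

near, far = 0.1, 1.0
C = far / (far - near)             # depth scale term (assumed DirectX LH)
E = -near * far / (far - near)     # depth offset term (assumed DirectX LH)
z = np.linspace(near, far, 10)     # view-space depths
depth = (C * z + E) / z            # projected depth after the divide by z
print(np.round(depth, 3))          # rises steeply near the camera, flattens far away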

Depth precision

As mentioned before, using a classic perspective projection gives us limited depth precision: the bigger the distance from the viewer, the lower the precision. This problem is often noticeable as flickering pixels in the distance.

We can partially solve this with logarithmic depth. We decrease precision in the near surroundings, but we get an almost linear distribution throughout the depth range. One disadvantage is that the logarithm does not work for negative input: triangles that are partially visible, with some points behind the viewer (on the negative Z axis), won't be calculated correctly. Shader programs usually won't crash on a negative logarithm, but the result is undefined. There are two possible solutions to this problem: either tessellate your scene into triangles so small that the problem won't matter, or write the depth in a pixel shader. Writing depth in a pixel shader has the disadvantage of disabling depth testing of geometry before rasterization. There could be some performance impact, but you can limit it by applying this trick only to near geometry that could be affected; that way, you will need a condition in your shader, or different shaders based on geometry distance from the viewer.

If you use this modification, be aware of one thing: the depth from the vertex shader has range [-1, 1], but gl_FragDepth has range [0, 1]. This is again OpenGL-only, since DirectX has depth in [0, 1] all the time. For a more detailed explanation, read the excellent article on the Outtera blog (link in the source article). The equations in their solution use the RH system (they aimed primarily at OpenGL), so once again we show the same equation in both systems, in Table 6, this time only for OpenGL: in DirectX the problem can be solved, as proposed in the article, by swapping near and far.

LH: gl_Position.z = (-2.0) * log((-gl_Position.z) * C + 1.0) / log(far * C + 1.0) + 1.0
RH: gl_Position.z =  (2.0) * log(( gl_Position.z) * C + 1.0) / log(far * C + 1.0) - 1.0

Table 6: Calculation of the new Z coordinate for logarithmic depth. C is the linearization constant (default value 1.0), far is the camera far-plane distance, and gl_Position is the output value of the vertex shader (in perspective projection). You MUST remember to multiply gl_Position.z by gl_Position.w before returning it from the shader.

If you have read the Outtera article and looked at my equations, you may notice that I used gl_Position.z in the logarithm calculations instead of W. I don't know if it is a mistake on Outtera's side, but with W I get nearly the same results in the RH system (as with Z), while LH is totally messed up. Plus, W is already the linearized depth (the distance of the point from the viewer): the first visible point has W = near and the last one has W = far.

If we plot classic versus logarithmic depth using the equations above, we end up with the two following graphs. The red curve is the same as in the previous chapter, the green one is logarithmic depth.
Figure 7: Projected depth with the classic perspective and the logarithmic one, LH (values used: near = 0.1, far = 1.0, C = 1.0)
Figure 8: Projected depth with the classic perspective and the logarithmic one, RH (values used: near = 0.1, far = 1.0, C = 1.0)
You can observe the effect of both projections (classic and logarithmic) in the video in the source article (rendered with an LH projection in OpenGL).

Oblique projection

The last projection-related section is a little different. So far, we have discussed perspective projection and rendering precision; in this section, another important technique is converted between the LH / RH systems and between OpenGL / DirectX. Oblique projection is not some special kind of projection that makes everything shiny: it is the classic perspective projection, only with a modified near clipping plane. The clipping planes of a classic projection are near and far; here we change near to get a different effect. This kind of projection is mostly used for rendering water-reflection textures. Of course, we can set a clipping plane manually in OpenGL or DirectX, but that won't work in the mobile version (OpenGL ES) or the web version (WebGL), and in DirectX we would need a different set of shaders. Bottom line: a solution with a manual clip plane is possible, but not as clean as an oblique projection.

First we need to precompute some data. For clipping, we obviously need a clipping plane, expressed in the current projective-space coordinates. This can be achieved by transforming the plane vector with the transposed inverse of the view matrix (assuming the world matrix is the identity):

Matrix4x4 tmp = Matrix4x4::Invert(viewMatrix);
tmp.Transpose();
Vector4 clipPlane = Vector4::Transform(clipPlane, tmp);

Now calculate the clip-space corner point opposite the clipping plane:

float xSign = (clipPlane.X > 0) ? 1.0f : ((clipPlane.X < 0) ? -1.0f : 0.0f);
float ySign = (clipPlane.Y > 0) ? 1.0f : ((clipPlane.Y < 0) ? -1.0f : 0.0f);
Vector4 q = Vector4(xSign, ySign, 1, 1);

and transform q into camera space by multiplying it by the inverse of the projection matrix. For a simplified calculation, the inverted projection matrix is already folded into the expressions below.

DirectX

In DirectX, we need to be careful, because the original article uses the OpenGL projection space with the Z coordinate in the range [-1, 1]. This is not possible in DirectX, so we need to change the equations and recalculate them with Z in the range [0, 1]. The following solution is valid for the LH system:

q.X = q.X / projection[0][0];
q.Y = q.Y / projection[1][1];
q.Z = 1.0f;
q.W = (1.0f - projection[2][2]) / projection[3][2];

float a = q.Z / Vector4::Dot(clipPlane, q);
Vector4 m3 = a * clipPlane;

OpenGL

The following equations could be simplified if we knew the handedness of our system. Since we want a universal solution, the full representation is used, independent of the system:

q.X = q.X / projection[0][0];
q.Y = q.Y / projection[1][1];
q.Z = 1.0 / projection[2][3];
q.W = (1.0 / projection[3][2]) - (projection[2][2] / (projection[2][3] * projection[3][2]));

float a = (2.0f * projection[2][3] * q.Z) / Vector4::Dot(clipPlane, q);
Vector4 m3 = clipPlane * a;
m3.Z = m3.Z + 1.0f;

In the calculation of m3.Z, we can directly add +1.0. Writing separate equations for the LH and RH systems shows why:

LH: m3.Z = m3.Z + projection[2][3]; // ([2][3] = +1)
RH: m3.Z = m3.Z - projection[2][3]; // ([2][3] = -1)

Final matrix composition

The final composition of the projection matrix is easy: replace the third column with the calculated vector.

Matrix4x4 res = projection;
res[0][2] = m3.X;
res[1][2] = m3.Y;
res[2][2] = m3.Z;
res[3][2] = m3.W;
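
As a cross-check, here is a small Python/numpy transcription of the DirectX (LH) steps above. It is a sketch under the article's row-major convention, not a drop-in implementation, and it assumes clip_plane is a length-4 vector already transformed as described:

import numpy as np

def oblique_near_plane(projection, clip_plane):
    sign = lambda v: float(v > 0) - float(v < 0)
    # clip-space corner point opposite the clip plane,
    # with the inverse projection terms folded in
    q = np.array([
        sign(clip_plane[0]) / projection[0][0],
        sign(clip_plane[1]) / projection[1][1],
        1.0,
        (1.0 - projection[2][2]) / projection[3][2],
    ])
    a = q[2] / np.dot(clip_plane, q)
    res = projection.copy()
    res[:, 2] = a * clip_plane     # replace the third column with m3
    return res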

Attachment

I have added an Excel file with projection matrices. You can experiment for yourself by changing near and far, or any other parameters and see the differences in depth. This is the same file that I used for creation of posted graphs.



Parsing Collada OpenGL / Vulkan / DirectX

- https://www.khronos.org/files/collada_spec_1_5.pdf

- The file format is based on OpenGL, so its coordinate system and the various operations are the same as OpenGL's.

- Right-handed coordinate system

- By default: Z up axis, -Y forward, X right

- Matrices are stored column-major.

- Formula for the final vertex position (in row-major matrix terms; see the sketch below)
    - vPos = vPos * bind_shape_matrix * inv_bind_matrix * animation_matrix * bone_weight
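
In practice, the per-bone contributions are weighted and summed (standard linear blend skinning). A minimal numpy sketch of the formula above, with hypothetical argument names:

import numpy as np

def skin_vertex(v_pos, bind_shape, inv_binds, anims, bone_indices, bone_weights):
    v = np.append(v_pos, 1.0)      # homogeneous row vector
    out = np.zeros(4)
    for b, w in zip(bone_indices, bone_weights):
        # row-vector convention: v * bind_shape * inv_bind[b] * anim[b], weighted
        out += (v @ bind_shape @ inv_binds[b] @ anims[b]) * w
    return out[:3]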

Collada XML structure

<asset>
    <unit name> : unit of measurement
    <up_axis> : up axis

<library_visual_scenes>
    - Stores the world transform of every object in the scene.
    - The bone hierarchy can be read from here. Note that the bone matrices recorded in library_visual_scenes should be treated as reference only; the bone transforms actually used come from library_controllers.

- <library_controllers>
    - bind_shape_matrix : the transform of the skinned mesh; naturally, each mesh carries its own.
    - inv_bind_matrix : the inverse (bind) transform matrix of a bone, used when applying it to the mesh; to reconstruct a bone's position, invert it first. It is not in a parent-relative coordinate system, so it can be used directly, regardless of the parent bone.
    - bone weight, bone index : the mesh's bone influence data

- <library_animations>
    - animation matrix : expressed in parent-relative coordinates, so to compute a final position you have to accumulate matrices down the hierarchy, inheriting from the parent bone to the child bone.

- <library_geometries>
    - mesh, uv texcoord, vertex color
    - It is fine to pre-multiply bind_shape_matrix into the mesh transform.

- <library_materials>
    - miscellaneous information, shader code
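
As a starting point for parsing, here is a minimal sketch using Python's standard xml.etree; the file name is an example, and the {*} namespace wildcard requires Python 3.8+ (real .dae files namespace every element):

import xml.etree.ElementTree as ET

tree = ET.parse('model.dae')       # example file name
root = tree.getroot()

# read the <asset> block: unit and up axis
unit = root.find('.//{*}asset/{*}unit')
up_axis = root.find('.//{*}asset/{*}up_axis')
print(unit.get('name'), unit.get('meter'))   # e.g. meter 1
print(up_axis.text)                          # e.g. Z_UP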

pyglet - get_devices error Python

Running pyglet.input.get_devices() produces the following error:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/python3.5/site-packages/pyglet/input/__init__.py", line 163, in get_devices
    xinput_get_devices(display))
  File "/python3.5/site-packages/pyglet/input/x11_xinput.py", line 332, in get_devices
    if not _have_xinput or not _check_extension(display):
  File "/python3.5/site-packages/pyglet/input/x11_xinput.py", line 325, in _check_extension
    ctypes.byref(first_error))
ctypes.ArgumentError: argument 2: <class 'TypeError'>: wrong type


It's a trivial thing, but a tricky error to track down. Go to the library folder where pyglet is installed, open x11_xinput.py, and change the string 'XInputExtension' to b'XInputExtension', as shown below:

def _check_extension(display):
    major_opcode = ctypes.c_int()
    first_event = ctypes.c_int()
    first_error = ctypes.c_int()
    xlib.XQueryExtension(display._display, b'XInputExtension', 
        ctypes.byref(major_opcode), 
        ctypes.byref(first_event),
        ctypes.byref(first_error))
    return bool(major_opcode.value)

DXT format sizes OpenGL / Vulkan / DirectX


Dealing with the "fsck error on boot" error Ubuntu / Linux

fsck from util-linux 2.26.2

/dev/sda6 contains a file system with errors, check forced.
/dev/sda6: Inodes that were part of a corrupted orphan linked list found.

/dev/sda6: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
fsck exited with status code 4
The root filesystem on /dev/sda6 requires a manual fsck

Busybox v1.22.1 (Ubuntu 1:1.22.0-15ubuntu1) built in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs) _

If Ubuntu fails to boot with a message like this, just do what it says and run fsck <target path>:

fsck /dev/sda6
