Spider-OS


Videocore : vertex shader
2019 Jul 3

Hello everyone. After two months of on-and-off work, here is a new post on the Videocore.

In the previous post, Videocore : textured cube in rotation, we saw how to program the Videocore to display a cube in 3D. The Videocore was responsible for the display, but the vertex computations were done on the ARM CPU side. Here I optimize the 3D calculation by entrusting this task to the Videocore directly. To achieve this, the Videocore must be used in GL mode, which enables vertex shading.

In this tutorial, I show you step by step how to move and rotate shapes: first in wireframe, then with colors, and finally with textures.

The icing on the cake is an interactive demo that you can run on your Raspberry Pi, rotating the spaceship yourself with the keyboard. An overview below :

Interested? Read on.

Comments :

Aran (webmaster)
2019 Jul 3 23:27

1. Vertex shading

In the previous posts, the Videocore was programmed in NV mode. That is to say, the Videocore was given the cube's vertex coordinates already calculated (by the ARM CPU). In addition, a single Videocore program, called the "fragment shader", was used to display each pixel of the cube after processing.

We will now program the Videocore in GL mode, which gives us many more possibilities. In GL mode, three Videocore programs are used: the vertex shader, the coordinate shader and the fragment shader. Their role is to transform the vertices that are provided to them as input. Here is how they are chained together :

Shaders
Figure 1

As input we provide each vertex of the shape we want to draw. Each vertex is composed of the X, Y, Z and W coordinates. These vertices are read and transformed by the vertex shader, for example to perform 3D rotations. The RGB colors or the 2D texture coordinates S and T are also given as vertex shader inputs. The colors and texture coordinates are passed on to the fragment shader so that it can display the corresponding pixel.

The coordinate shader makes it possible to optimize the rendering process. It only needs the X, Y, Z and W coordinates. It also allows some clipping to be done (which we will not use in this tutorial, to keep the code simple).

2. Data structure

Communication between the different elements of the preceding diagram is done by exchanging data in very precise formats. To understand the code that follows, it is necessary to see how the various components of the Videocore are interconnected. The Architecture Reference Guide has a complete diagram on page 13, but I made a simpler one for our tutorial.

diagram VPM
Figure 2 : VPM block diagram

The vertex attributes are stored in memory (preferably GPU memory). Everything then passes through the VPM, and it is the VCM that takes care of feeding it. The vertex shader retrieves the vertex attributes via the VPM_READ register. It must then format this data for the PSE, as described in the Architecture Reference Guide on page 60.
The formatted data in the VPM is made available to the PSE unit using the VPM_WRITE register.

An example of PSE format :

Vertex Format PSE
Figure 3 : shaded vertex format for PSE

Xs and Ys are the vertex coordinates encoded in 12.4 fixed point, which correspond to the position of the pixel on the screen. Zs is the depth. For the moment, 1/Wc is always coded with the float value 1.0.
Then there are optional entries, called varyings, which carry additional information such as the RGB color of the pixel or the texture coordinates of the vertex.
The varyings are then read by the fragment shader via the VARYING_READ register.
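
To make this layout concrete, here is a small C sketch of the shaded vertex as the PSE expects it (the non-clipped variant, with three color varyings as on the left of Figure 3). It is only an illustration with field names of my own; it assumes Xs sits in the low half of the Ys-Xs word, which is what the .16a / .16b writes in the vertex shader further down suggest.

#include <stdint.h>

/* 12.4 fixed point: 12 integer bits, 4 fractional bits (1 pixel = 16 units) */
static int16_t to_fixed_12_4(float pixels)
{
    return (int16_t)(pixels * 16.0f);
}

/* Shaded vertex handed to the PSE (no clipping), with three varyings
   as in the left part of Figure 3. sizeof() == 24 bytes. */
struct __attribute__((packed)) pse_vertex_rgb
{
    int16_t xs;          /* screen X, 12.4 fixed point (low half of the Ys-Xs word) */
    int16_t ys;          /* screen Y, 12.4 fixed point (high half) */
    float   zs;          /* depth */
    float   wc_inv;      /* 1/Wc, always 1.0 for now */
    float   varying[3];  /* e.g. R, G, B, read back by the fragment shader */
};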

The coordinate shader also retrieves the vertex attributes via the VPM and the VPM_READ register. It then makes the shaded vertex available to the PTB. In our case, this will be without clipping.

An example of a PTB format :

Vertex Format PTB
Figure 4 : shaded coordinate format for PTB

3. GL Shader State Record

Vertex data and shaders are in memory, as we have just seen. We must indicate to the Videocore the address of these various elements, as well as the amount of data going in and out of the VPM. This is done using a GL Shader State Record. For GL mode to work properly, we must configure it completely. Here is the description of all the fields :

GL Shader State Record
Figure 5 : GL Shader State Record

As you can see, it's a little more complicated than the NV format ;-) Do not panic, I'll explain everything.

Let's start with bytes 0 and 1. Bit 2 enables clipping, and therefore the use of the right-hand PTB format in Figure 4. To simplify the code, we do not enable clipping, and we will use the left-hand PTB format with Ys, Xs, Zs and 1/Wc only. We do not use a point size.

Fragment shader

Bytes 2 to 11, in red, configure the fragment shader. Byte 3 indicates the number of varyings provided at the input of the fragment shader: for example 3 if we have the RGB components, or 2 for the texture coordinates S and T.
The address of our fragment shader code is given at byte 4.

A new element appears at byte 8: the uniform. This is extra data, outside the vertex attributes, that the shader can access in memory. It allows us, for example, to pass parameters to our shader to change its behavior. We will see a concrete application later. So here we give the address where the fragment shader can read the uniforms, one after the other, as a list.

Vertex shader

We arrive at the configuration of the vertex shader, with bytes 12 to 23 in green.
Byte 14 indicates which attribute array is associated with the vertex shader. We set the value 1 to point to the parameters in bytes 36 to 43, in green.

Byte 15 is the size in bytes of the shaded vertex with its varyings, which the vertex shader writes to the VPM (via the VPM_WRITE register). As we have seen, this is the PSE format of Figure 3. On the left of Figure 3 we use 3 varyings, which gives: Ys (2 bytes) + Xs (2 bytes) + Zs (4 bytes) + 1/Wc (4 bytes) + RGB (3 * 4 bytes) = 24 bytes.

Then come the addresses of the vertex shader code and of its uniforms.

We continue with bytes 36 to 43 in green.
At byte 36 we give the address of all the vertices with their attributes.
Byte 40 holds the total size of the attributes, minus 1, for a single vertex. In Figure 6 below, we have the attributes X, Y, Z, W, R, G, B, i.e. 7 * 4 = 28 bytes. The memory stride (byte 41) is usually the same, unless there is a gap in memory between vertices.
In our example below, we therefore get byte 40 = 27 and byte 41 = 28. The vertex shader will read the VPM_READ register 7 times to retrieve the attributes of one vertex via the VCM.

Vertex attributes
Figure 6 : Vertex attributes

Coordinate shader

Are you OK? Still here? Good news: the coordinate shader is similar to the vertex shader.
Its parameters are in bytes 24 to 35, and 44 to 51, in blue.

There are some differences. At byte 26, the value 2 is set to point to attribute array [1] at byte 44.
The coordinate shader writes its data to the VPM in PTB format, so at byte 27 we give the size of the shaded vertex (without clipping), as in Figure 4: Ys (2 bytes) + Xs (2 bytes) + Zs (4 bytes) + 1/Wc (4 bytes) = 12 bytes.

The coordinate shader is only given the X, Y, Z and W attributes, so we have a stride of 4 * 4 = 16 bytes. That gives byte 41 = 16 and byte 40 = 15.

Here is what our GL Shader State Record looks like in code :

struc GL_Shader_State_Record		; align 16
{
	db 0							; 0-1 	: flag bits
	db 0
	db 0							; 2 : Fragment Shader Number of Uniforms (not used currently)
	db 3							; 3 : Fragment Shader Number of Varyings
	.fragmentShaderCode	dw 0	; 4-7 : Fragment Shader Code Address		
	.fragmentUniformData	dw 0	; 8–11 : Fragment Shader Uniforms Address
	
	; shaded vertex PSE : Ys-Xs, Zs, 1/Wc, R, G, B
	dh 0							; 12–13 : Vertex Shader Number of Uniforms (not used currently)
	db 1							; 14 	: Vertex Shader Attribute Array select bits (8 bits for 8 attribute arrays)
	db 6 * 4						; 15 	: Vertex Shader Total Attributes Size
	.vertexShaderCode 	dw 0 	; 16–19 : Vertex Shader Code Address
	.vertexUniformData		dw 0	; 20–23 : Vertex Shader Uniforms Address
	
	; shaded vertex PTB : Ys-Xs, Zs, 1/Wc
	dh 0							; 24–25 : Coordinate Shader Number of Uniforms (not used currently)
	db 2							; 26 	: Coordinate Shader Attribute Array select bits (8 bits for 8 attribute arrays)
	db 3 * 4						; 27 	: Coordinate Shader Total Attributes Size
	.coordinateShaderCode	dw 0	; 28–31 : Coordinate Shader Code Address
	.coordinateUniformData 	dw 0	; 32–35 : Coordinate Shader Uniforms Address

	; vertex attributes (dw) : X, Y, Z, W, R, G, B
	.VertexShaderData 		dw 0	; 36–39 + n*8 : Attribute Array [n] Base Memory Address (n = 0-7)
	db 7 * 4 - 1					; 40 + n*8 : Attribute Array [n] Number of Bytes-1
	db 7 * 4						; 41 + n*8 : Attribute Array [n] Memory Stride
	db 0							; 42 + n*8 : Attribute Array [n] Vertex Shader VPM Offset (from Base Address)
	db 0							; 43 + n*8 : Attribute Array [n] Coordinate Shader VPM Offset (from Base Address)

	; vertex attributes (dw) : X, Y, Z, W
	.CoordinateShaderData 	dw 0	; 36–39 + n*8 : Attribute Array [n] Base Memory Address (n = 0-7)
	db 4 * 4 - 1					; 40 + n*8 : Attribute Array [n] Number of Bytes-1
	db 4 * 4						; 41 + n*8 : Attribute Array [n] Memory Stride
	db 0							; 42 + n*8 : Attribute Array [n] Vertex Shader VPM Offset (from Base Address)
	db 0							; 43 + n*8 : Attribute Array [n] Coordinate Shader VPM Offset (from Base Address)

	times 12 dw 0

	dw 0						; 100–103 + n*4	Extended Attribute Array [n] Memory Stride (optional)
}
virtual at 0
	oGLShaderState GL_Shader_State_Record
	sizeof.GLShaderState = $ - oGLShaderState  
end virtual

align 16
glShaderState			GL_Shader_State_Record
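
For readers more at ease with C, the same record can be sketched as a packed structure. It simply mirrors the assembler structure above, with field names of my own; it is not an official header, and the optional extended memory strides at bytes 100+ are left out.

#include <stdint.h>

struct __attribute__((packed)) attribute_array
{
    uint32_t base_address;        /* 36-39 + n*8 : vertex attributes in memory */
    uint8_t  num_bytes_minus_1;   /* 40 + n*8 */
    uint8_t  memory_stride;       /* 41 + n*8 */
    uint8_t  vs_vpm_offset;       /* 42 + n*8 : Vertex Shader VPM offset */
    uint8_t  cs_vpm_offset;       /* 43 + n*8 : Coordinate Shader VPM offset */
};

struct __attribute__((packed)) gl_shader_state_record
{
    uint16_t flags;                     /* 0-1   : flag bits (bit 2 = clipping) */
    uint8_t  fs_num_uniforms;           /* 2     : not used here */
    uint8_t  fs_num_varyings;           /* 3     */
    uint32_t fs_code_address;           /* 4-7   */
    uint32_t fs_uniforms_address;       /* 8-11  */

    uint16_t vs_num_uniforms;           /* 12-13 */
    uint8_t  vs_attribute_select;       /* 14    : one bit per attribute array */
    uint8_t  vs_total_attributes_size;  /* 15    */
    uint32_t vs_code_address;           /* 16-19 */
    uint32_t vs_uniforms_address;       /* 20-23 */

    uint16_t cs_num_uniforms;           /* 24-25 */
    uint8_t  cs_attribute_select;       /* 26    */
    uint8_t  cs_total_attributes_size;  /* 27    */
    uint32_t cs_code_address;           /* 28-31 */
    uint32_t cs_uniforms_address;       /* 32-35 */

    struct attribute_array attr[8];     /* 36-99 : 8 entries of 8 bytes */
};                                      /* total: 100 bytes, as in Figure 5 */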
	

4. Our first rectangle

Let's move on to a concrete implementation: displaying a wireframe rectangle. So as not to forget anything, we will go through all the necessary steps.

4.1 vertex attributes

As seen in the GL Shader State Record, we need to create the code for the shaders, as well as the vertex data.

In OpenGL, a rectangle is drawn with two triangles. As we use indexed mode, we only have 4 vertices to define, as in Figure 7 below:

Vertices rectangle
Figure 7 : Vertices of a rectangle

We obtain the following vertices:

struc Vertex_Shader_Data
{
	; Vertex 0
	dw -1.0			; X
	dw 0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	
	; Vertex 1
	dw 1.0			; X
	dw 0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	
	; Vertex 2
	dw 1.0			; X
	dw -0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	
	; Vertex 3
	dw -1.0			; X
	dw -0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
}

struc data_Indices_List
{
	db 0, 1, 3		; triangle 1
	db 1, 2, 3		; triangle 2
}

align 16
VertexShaderData		Vertex_Shader_Data
align 16
VextexIndicesList		data_Indices_List
	

4.2 vertex shader code

To program the shaders, you would normally need to know how to encode the instructions in binary. I will spare you that ;-) and give the instructions directly in assembler, with the associated hexadecimal encoding.

Let's see what the vertex shader must do. It must read the attributes of a vertex, adapt them to the screen, and write them to the VPM in PSE format. Keep in mind that the vertex shader is executed as many times as there are vertices.

We start by configuring the VPM, indicating how many attributes must be read for one vertex.

; setup VPM with 4 attributes to read in VCM
dw 0x00401A00, 0xE0020C67
	

Then we read all the attributes.

; nop ; mov r0a, vpm_read			; r0a = read X (float)
dw 0x15C27DF7, 0x10020027

; nop ; mov r1a, vpm_read			; r1a = read Y (float)
dw 0x15C27DF7, 0x10020067

; nop ; mov r2a, vpm_read			; r2a = read Z (float)
dw 0x15C27DF7, 0x100200A7

; nop ; mov r3a, vpm_read			; r3a = read 1/W (float)
dw 0x15C27DF7, 0x100200E7
	

The X and Y coordinates are floats in the range [-1.0, 1.0]. They must be adapted to the screen and converted into pixels.
We first scale them by multiplying them by a factor, computed with the following formula: factor = width in pixels * 16 / 2. For example, for our rectangle to take a third of the screen width (on a 1440 x 900 screen), we choose a factor of 1440/3 * 16/2 = 3840.

The question now is: how do we pass this factor to the shader, especially if we have to change it, for example to zoom? The answer: by reading a uniform.

Here is the corresponding code :

; nop ; mov acc1, uniform			; acc1 = factor
dw 0x15827DF7, 0x10020867

; nop ; mov acc2, uniform			; acc2 = -factor
dw 0x15827DF7, 0x100208A7

; nop ; fmul acc1, r0a, acc1			; acc1 = X * factor
dw 0x20027031, 0x100049E1

; nop ; fmul acc2, r1a, acc2			; acc2 = Y * -factor
dw 0x20067032, 0x100049E2	
	

And we create the list of uniforms in memory :

struc Vertex_Uniform_Data
{
	.factor:	dw 3840.0		; factor
	.factorn:	dw -3840.0		; -factor

	.originX:	dh 1440/2 * 16		; origin X in 12.4 fixed point
	.originY:	dh 900/2 * 16		; origin Y in 12.4 fixed point
}
virtual at 0
	oVertUni Vertex_Uniform_Data
end virtual

align 16
vertexUniformData		Vertex_Uniform_Data
	

In the PSE format, the X and Y coordinates are in 12.4 fixed point. We therefore perform a conversion. Then we position our rectangle at the center of the screen by adding an X and Y origin. Again, these origin coordinates are retrieved via a uniform containing two 12.4 fixed-point values (dh-dh).

; nop ; ftoi r0a.16a, acc1, acc1 		; convert X to 12.4 fixed point
dw 0x079E7240, 0x10120027

; nop ; ftoi r0a.16b, acc2, acc2 		; convert Y to 12.4 fixed point
dw 0x079E7480, 0x10220027

; mov acc0,uni						; acc0 = uniform origin X and Y (12.4 x2)
dw 0x15827DF7, 0x10020827

; r0a = r0a + acc0					; add origin to vertices :
dw 0x0C027C37, 0x10020027		; X = X + origin_X	; Y = -Y + origin_Y
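
To sum up what the shader has computed so far, here is the same arithmetic as a small C sketch. The factor, originX and originY values are the uniforms defined above; the exact rounding behaviour of ftoi is glossed over.

#include <stdint.h>

/* What the vertex shader computes for one vertex (CPU-side illustration).
   x and y are the input coordinates in [-1.0, 1.0]. */
static void shade_vertex_2d(float x, float y,
                            float factor,      /* uniform, e.g. 3840.0 */
                            int16_t origin_x,  /* uniform, 1440/2 * 16 */
                            int16_t origin_y,  /* uniform,  900/2 * 16 */
                            int16_t *xs, int16_t *ys)
{
    float px = x *  factor;        /* acc1 = X * factor (already in 12.4 units) */
    float py = y * -factor;        /* acc2 = Y * -factor (screen Y grows downwards) */

    *xs = (int16_t)px + origin_x;  /* ftoi to 12.4, then add origin X */
    *ys = (int16_t)py + origin_y;  /* ftoi to 12.4, then add origin Y */
}

For X = -1.0 and the values above, this gives 720 * 16 - 3840 = 7680, i.e. pixel 480: the left edge of a rectangle one third of the screen wide, centred horizontally.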
	

The last step is to write the shaded vertex we have just calculated to the VPM, in the PSE format we saw: Ys-Xs, Zs, 1/Wc. A VPM configuration for writing is required beforehand. Here is the code:

; setup VPM to write 
dw 0x00001a00, 0xE0021C67	

; mov vpm_write, r0a				; write screen Xs-Ys (12.4 x2)
dw 0x15027DF7, 0x10020C27

; mov vpm_write, r2a				; write Zs
dw 0x150A7DF7, 0x10020C27	

; mov vpm_write, r3a				; write 1/W
dw 0x150E7DF7, 0x10020C27
	

For the shader to terminate properly, an end-of-thread sequence is required. We add the following instructions:

; thread end
dw 0x9E7000, 0x300009E7

; branch delay 1
dw 0x9E7000, 0x100009E7

; branch delay 2
dw 0x9E7000, 0x100009E7
	

That's it, we just created our first vertex shader !

4.3 coordinate shader code

Now it's the coordinate shader's turn... Good news: the code is exactly the same, because the expected output format is the PTB, and it matches the PSE format since there are no varyings :-)
In addition, we use the same list of uniforms as the vertex shader.

4.4 fragment shader code

Our fragment shader code is simple: it displays the pixel in green. The color is provided directly in the code as an immediate value, so there is no input data to read.
Here is the code :

struc Fragment_Shader_Code
{
	dw 0x009E7000, 0x100009E7		; nop	
	
	dw 0xFF00FF00, 0xE0020BA7		; ldi tlbc, ARGB green
	
	dw 0x009E7000, 0x500009E7		; scoreboard unlock
	
	dw 0x009E7000, 0x300009E7		; thread end

	dw 0x009E7000, 0x100009E7		; branch delay 1

	dw 0x009E7000, 0x100009E7		; branch delay 2
}

align 16
fragmentShaderCode		Fragment_Shader_Code
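
As an aside, the immediate value 0xFF00FF00 is simply opaque green packed as ARGB, the pixel format the ldi comment refers to. A small C helper makes the packing explicit (assuming this 32-bit ARGB layout):

#include <stdint.h>

/* 0xAARRGGBB packing: make_argb(0xFF, 0x00, 0xFF, 0x00) == 0xFF00FF00 */
static uint32_t make_argb(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)a << 24) | ((uint32_t)r << 16) |
           ((uint32_t)g << 8)  |  (uint32_t)b;
}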
	

4.5 GL Shader State Record

We have just coded all the shaders. We must now update the GL Shader State Record. It is similar to the one we have already seen; I only show the changes to be made :

	db 0							; 3 : Fragment Shader Number of Varyings
	
	; shaded vertex PSE : Ys-Xs, Zs, 1/Wc
	db 3 * 4						; 15 	: Vertex Shader Total Attributes Size

	; vertex attributes (dw) : X, Y, Z, W
	db 4 * 4 - 1					; 40 + n*8 : Attribute Array [n] Number of Bytes-1
	db 4 * 4						; 41 + n*8 : Attribute Array [n] Memory Stride
	

It is also necessary to update the shader addresses with the structures created previously. Placing them in GPU memory is preferred, to improve display speed.

4.6 The last step

That's all very nice, but where is the rectangle? ;-) It's coming. As we saw in the previous posts, there are several programs to run on the ARM CPU side.

I will only mention those that need to be changed.

In v3dBinnerPrep (preparation of the binner control list), we indicate that we are drawing two triangles in indexed mode:

; vertex configuration (Indexed_Primitive_List)

mov r4,v3dCLBin_A						; pointer to GPU memory
ldr r4,[r4]
mov r0,Mode_Line_Loop + Index_Type_8
strb r0,[r4,oVCLBin.vertex.data]
	
mov r0,3 * 2								; nb indices = 3 vertices * nb triangles
add r1,r4,oVCLBin.vertex.length
strNotAlign32 r0,r1

imm32 r0,VextexIndicesList - glShaderState		; Address of Indices List
add r0,r5	
add r1,r4,oVCLBin.vertex.address
strNotAlign32 r0,r1

mov r0,3 * 2											
add r1,r4,oVCLBin.vertex.maxindex
strNotAlign32 r0,r1
	

The v3dBinnerRun and v3dRenderRun programs must be modified to clear the caches, so that the vertices can be refreshed. Here are the changes :

v3dBinnerRun:

	mov r4,PERIPHERAL_BASE + V3D_BASE	

	mov r0,L2CCLR + L2CDIS		; clear cache L2
	str r0,[r4,V3D_L2CACTL]
			
	ldr r0,[r4,V3D_SLCACTL]		; clear uniform, instruction, texture caches 
	orr r0,UCCS0_to_UCCS3
	orr r0,ICCS0_to_ICCS3
	orr r0,T0CCS0_to_T0CCS3
	orr r0,T1CCS0_to_T1CCS3
	str r0,[r4,V3D_SLCACTL]
	
	mov r0,0x20					; stop the thread
	str r0,[r4,V3D_CT0CS]
	
	.DO2:						; wait for it to stop
		ldr r0,[r4,V3D_CT0CS]
		tst r0,0x20
	bne .DO2
						
	mov r0,1
	str r0,[r4,V3D_BFC]			; reset flush counter	

								; thread 0 configuration
	ldr r0,[v3dCLBin_G]  			; pointer to control list
	str r0,[r4,V3D_CT0CA]	
	add r0,sizeof.VCLBin
	str r0,[r4,V3D_CT0EA]			; thread execution
						
								
	.DO1:						; waiting for thread to finish
		ldr r0,[r4,V3D_BFC]		; flush counter
		tst r0,1					; test if the PTB emptied all lists of tiles in memory
	beq .DO1
	
v3dRenderRun:

	mov r4,PERIPHERAL_BASE + V3D_BASE
	
	mov r0,0x20					; stop the thread
	str r0,[r4,V3D_CT1CS]
	
	.DO2:						; wait for it to stop
		ldr r0,[r4,V3D_CT1CS]
		tst r0,0x20
	bne .DO2

	mov r0,1						; reset frame counter
	str r0,[r4,V3D_RFC]				
	
	ldr r0,[v3dCLRen_G]			; pointer to control list
	str r0,[r4,V3D_CT1CA]
	mov r1,v3dAddrEnd
	ldr r0,[r1]						; end address of the control list
	orr r0,0xc0000000				; conversion to GPU address
	str r0,[r4,V3D_CT1EA]			; thread execution

	.DO1:						; waiting for thread to finish
		ldr r0,[r4,V3D_RFC]		; flush counter
		tst r0,1					; test if the last tile storage operation is complete
	beq .DO1
	

That's it. After so much work, here is what we get :

Green rectangle

All that for this! Yes, it is only the beginning, and the indispensable foundation. We will now improve the code.

5. A colored rectangle

To color our rectangle, we have to modify our fragment shader and provide it with RGB colors. These colors are supplied in the vertex attributes, and it is the vertex shader that is responsible for passing them on via varyings.

5.1 Update vertices

So, first step: update the vertices with RGB colors.

struc Vertex_Shader_Data
{
	; Vertex 0
	dw -1.0			; X
	dw 0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	dw 1.0			; R
	dw 0.0			; G
	dw 0.0			; B
	
	; Vertex 1
	dw 1.0			; X
	dw 0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	dw 0.0			; R
	dw 0.0			; G
	dw 1.0			; B		
	
	; Vertex 2
	dw 1.0			; X
	dw -0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	dw 0.0			; R
	dw 1.0			; G
	dw 0.0			; B	
	
	; Vertex 3
	dw -1.0			; X
	dw -0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	dw 1.0			; R
	dw 1.0			; G
	dw 0.0			; B	
}
	

5.2 vertex shader code

Our vertex shader code must now read 3 additional attributes. We configure the VPM accordingly, and we add three vpm_read instructions :

; setup VPM with 7 attributes to read in VCM
dw 0x00701A00, 0xE0024C67

; nop ; mov r0a, vpm_read			; r0a = read X (float)
dw 0x15C27DF7, 0x10020027

; nop ; mov r1a, vpm_read			; r1a = read Y (float)
dw 0x15C27DF7, 0x10020067

; nop ; mov r2a, vpm_read			; r2a = read Z (float)
dw 0x15C27DF7, 0x100200A7

; nop ; mov r3a, vpm_read			; r3a = read 1/W (float)
dw 0x15C27DF7, 0x100200E7

; nop ; mov r5a, vpm_read			; r5a = read R (float)
dw 0x15C27DF7, 0x10020167

; nop ; mov r6a, vpm_read			; r6a = read G (float)
dw 0x15C27DF7, 0x100201A7

; nop ; mov r7a, vpm_read			; r7a = read B (float)
dw 0x15C27DF7, 0x100201E7
	

On output, we add the three varyings of the PSE format. The varyings are not modified by the code.

	
; setup VPM to write 
dw 0x00001a00, 0xE0021C67	

; mov vpm_write, r0a			; write screen Xs-Ys (12.4 x2)
dw 0x15027DF7, 0x10020C27

; mov vpm_write, r2a			; write Zs
dw 0x150A7DF7, 0x10020C27	

; mov vpm_write, r3a			; write 1/W
dw 0x150E7DF7, 0x10020C27

; mov vpm_write, r5a			; write R
dw 0x15167DF7, 0x10020C27

; mov vpm_write, r6a			; write G
dw 0x151A7DF7, 0x10020C27

; mov vpm_write, r7a			; write B
dw 0x151E7DF7, 0x10020C27
	

5.3 coordinate shader code

The coordinate shader code is not changed.

5.4 fragment shader code

Our fragment shader code is a little more complicated. We have to read the varyings and interpolate the colors between the vertices. Some registers are automatically updated when the varyings are read, as described on page 51 of the Architecture Reference Guide.

	
; W = r15a
; C = acc5

; fmul acc0, r15a, varying_read		; acc0 = W * Red ; update C
dw 0x203E3DF7, 0x110059E0

; fadd acc0, acc0, acc5				; acc0 = W * Red + C
; fmul acc1, r15a, varying_read		; acc1 = W * Green ; update C		
dw 0x213E3177, 0x11024821			 

; fadd acc1, acc1, acc5				; acc1 = W * Green + C
; fmul acc2, r15a, varying_read		; acc2 = W * Blue ; update C		
dw 0x213E3377, 0x11024862		

; fadd acc2, acc2, acc5				; acc2 = W * Blue + C
dw 0x013E3577, 0x110208A7
	

The interpolated RGB colors are then packed into the accumulator acc0 for writing to the TLB.

	
; fmul acc0, acc0, 1.0 ; 32 ->8c		; convert R to 8-bit color in range [0, 1.0] 
dw 0x20820000, 0xD16059E0

; fmul acc0, acc1, 1.0 ; 32 ->8b		; convert G to 8-bit color in range [0, 1.0] 
dw 0x20820249, 0xD15059E0

; fmul acc0, acc2, 1.0 ; 32 ->8a		; convert B to 8-bit color in range [0, 1.0] 
dw 0x20820492, 0xD14059E0

; fadd tlbc, acc0, 0.0				; copy pixel color to TLB_COLOUR_ALL
dw 0x150001C7, 0xD1020BA7

dw 0x009E7000, 0x500009E7		; scoreboard unlock

dw 0x009E7000, 0x300009E7		; thread end

dw 0x009E7000, 0x100009E7		; branch delay 1

dw 0x009E7000, 0x100009E7		; branch delay 2
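
Written out in C, the work of this fragment shader looks roughly like the sketch below: each channel read from VARYING_READ is combined with the per-fragment W (r15a) and the coefficient C (acc5, refreshed by the hardware at every varying read), then the three channels are packed into a 32-bit pixel. This is an illustration of the formula, not a literal translation of the QPU code; the channel order follows the ARGB layout used for the green pixel in chapter 4.4.

#include <stdint.h>

/* One varying: the value read from VARYING_READ, completed with the
   per-fragment W and the C coefficient valid for that read. */
static float interpolate(float varying_read, float w, float c)
{
    return w * varying_read + c;
}

/* Pack three interpolated channels (floats in [0.0, 1.0]) into the
   0xAARRGGBB pixel written to the tile buffer (alpha forced to 0xFF here). */
static uint32_t pack_rgb(float r, float g, float b)
{
    uint32_t ri = (uint32_t)(r * 255.0f);
    uint32_t gi = (uint32_t)(g * 255.0f);
    uint32_t bi = (uint32_t)(b * 255.0f);
    return 0xFF000000u | (ri << 16) | (gi << 8) | bi;
}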
	

5.5 GL Shader State Record

We are progressing well. As always, we must not forget to update the GL Shader State Record:

db 3							; 3 : Fragment Shader Number of Varyings

; shaded vertex PSE : Ys-Xs, Zs, 1/Wc, R, G, B
db 6 * 4						; 15 	: Vertex Shader Total Attributes Size

; vertex attributes (dw) : X, Y, Z, W, R, G, B
db 7 * 4 - 1					; 40 + n*8 : Attribute Array [n] Number of Bytes-1
db 7 * 4						; 41 + n*8 : Attribute Array [n] Memory Stride
	

5.6 And to finish : v3dBinnerPrep

In the preparation of our binner, we specify that we now want filled triangles, so we replace the primitive Mode_Line_Loop with Mode_Triangles.

Here's the result :

Rainbow rectangle

6. zoom in and move the rectangle

6.1 zoom

We will now exercise one of the advantages of shaders: input parameters, namely the uniforms. In our example, by modifying the factor we can zoom the rectangle. Then we just re-run the binner and the render.

Here is the ARM-side code to enlarge and shrink the rectangle in a loop :

mov r0,v3dShader_A			; pointer on uniforms
ldr r0,[r0]
mov r4,vertexUniformData - glShaderState
add r4,r0				

vldr s2,[FLOAT_8_0]			; counters
mov r5,0
	
.DO1:
	vldr s0,[r4,oVertUni.factor]	; update of the factor via the uniforms
	vldr s1,[r4,oVertUni.factorn]		
	vadd.f32 s0,s2
	vsub.f32 s1,s2		
	vstr s0,[r4,oVertUni.factor]
	vstr s1,[r4,oVertUni.factorn]
			
	bl v3dBinnerRun			; display			
	bl v3dRenderRun	

	add r5,1
	
	mov r0,1440				; max width rectangle		
	cmp r5,r0
		moveq r5,0
		vnegeq.f32 s2,s2		; reversing the direction of zoom
b .DO1
	

And here is what it gives:

6.2 move

What if we moved the rectangle? Come on, I'll let you guess: how do we do it?
Yes, with the uniforms. This time we change the origin of the rectangle on the screen: the originX and originY parameters.

A small loop with screen-edge detection, and voilà :

mov r5,16		; one pixel for moving on X
mov r6,16		; one pixel for moving on Y

.DO1:
	ldrh r0,[r4,oVertUni.originX]
	add r0,r5
	cmp r0,1440*5/6 * 16		; right edge reached
		rsbge r5,0			; change of direction : r5 = -r5
	cmp r0,1440/6 * 16			; left edge reached
		rsbeq r5,0			; change of direction : r5 = -r5
	strh r0,[r4,oVertUni.originX]	; update uniform

	ldrh r0,[r4,oVertUni.originY]
	add r0,r6
	cmp r0,780 * 16			; top edge reached
		rsbge r6,0			; change of direction : r6 = -r6
	cmp r0,116 * 16			; bottom edge reached
		rsbeq r6,0			; change of direction : r6 = -r6
	strh r0,[r4,oVertUni.originY]	; update uniform
			
	bl v3dBinnerRun			; display					
	bl v3dRenderRun	
b .DO1
	

7. a rectangle with texture

Come on, let's change the appearance of our rectangle a little by applying a texture instead of colors. This operation requires updating the vertices and reprogramming all our shaders.

A short explanation of texture mapping is needed. Let's take the logo of our beloved Raspberry Pi ;-)

Texture mapping
Figure 8 : Texture mapping

Texture coordinates are defined from 0.0 to 1.0, unlike the vertex coordinates, which are expressed from -1.0 to 1.0. For textures, it is customary to call the coordinates U and V. When programming a 2D texture on the Videocore, we speak of the S and T registers. In summary: X = U = S, and Y = V = T.

The bottom left corner of the texture is defined by S = 0.0, and T = 0.0. The opposite corner at the top right is S = 1.0, and T = 1.0.

Our rectangular shape is defined by two triangles, and therefore 4 vertices. We map the texture onto the rectangle by associating each corner of the texture with a vertex. This has the effect of stretching the texture over the entire surface of the rectangle.

7.1 Update vertices

By following this logic we can already redefine our vertices :

struc Vertex_Shader_Data
{
	; Vertex 0
	dw -1.0			; X
	dw 0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	dw 0.0			; S
	dw 1.0			; T
	
	; Vertex 1
	dw 1.0			; X
	dw 0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	dw 1.0			; S
	dw 1.0			; T
	
	; Vertex 2
	dw 1.0			; X
	dw -0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	dw 1.0			; S
	dw 0.0			; T
	
	; Vertex 3
	dw -1.0			; X
	dw -0.5			; Y	
	dw 1.0			; Z	
	dw 1.0			; 1 / W
	dw 0.0			; S
	dw 0.0			; T
}
	

7.2 vertex shader code

We now modify our vertex shader. It must read the X, Y, Z, W coordinates as usual, but also the S and T attributes. That gives us 6 attributes to configure for the VPM, and therefore 6 VPM_READ instructions to encode.
On output, for the PSE format, we have 5 shaded-vertex values to write. The rest of the code does not change.

; setup VPM with 6 attributes to read in VCM
dw 0x00601A00, 0xE0020C67

; nop ; mov r0a, vpm_read			; r0a = read X (float)
dw 0x15C27DF7, 0x10020027

; nop ; mov r1a, vpm_read			; r1a = read Y (float)
dw 0x15C27DF7, 0x10020067

; nop ; mov r2a, vpm_read			; r2a = read Z (float)
dw 0x15C27DF7, 0x100200A7

; nop ; mov r3a, vpm_read			; r3a = read 1/W (float)
dw 0x15C27DF7, 0x100200E7

; nop ; mov r5a, vpm_read			; r5a = read S (float)
dw 0x15C27DF7, 0x10020167

; nop ; mov r6a, vpm_read			; r6a = read T (float)
dw 0x15C27DF7, 0x100201A7
	

		
; setup VPM to write 
dw 0x00001a00, 0xE0021C67	

; mov vpm_write, r0a				; write screen Xs-Ys (12.4 x2)
dw 0x15027DF7, 0x10020C27

; mov vpm_write, r2a				; write Zs
dw 0x150A7DF7, 0x10020C27	

; mov vpm_write, r3a				; write 1/W
dw 0x150E7DF7, 0x10020C27

; mov vpm_write, r5a				; write S
dw 0x15167DF7, 0x10020C27

; mov vpm_write, r6a				; write T
dw 0x151A7DF7, 0x10020C27
	

7.3 coordinate shader code

The coordinate shader code is not changed.

7.4 fragment shader code

Once again, it is the fragment shader code that needs to be completely reworked.
We start by reading the varyings to recover the S and T texture coordinates. After interpolation, the T and S registers of the TMU texture unit are written; it is the write to the S register that triggers the lookup by the TMU.
The texel corresponding to the S and T coordinates can then be retrieved in the accumulator acc4, and written to the tile buffer.

This process may seem a little mysterious, so I invite you to read section 4, Texture and Memory Lookup Unit, of the Architecture Reference Guide.
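
Conceptually, the sequence boils down to the sketch below: interpolate S and T, hand them to the texture unit, and get a texel back. The lookup_texel function is purely hypothetical; on the real hardware this work is done by the TMU, configured by the two uniforms that are read automatically during the T and S writes, and with linear filtering rather than the nearest-texel fetch shown here.

#include <stdint.h>

/* Hypothetical stand-in for the TMU: clamp S and T to [0, 1] and fetch
   the nearest texel from a width x height RGBA32 texture. */
static uint32_t lookup_texel(const uint32_t *texture, int width, int height,
                             float s, float t)
{
    if (s < 0.0f) s = 0.0f; else if (s > 1.0f) s = 1.0f;  /* CLAMP wrap mode */
    if (t < 0.0f) t = 0.0f; else if (t > 1.0f) t = 1.0f;
    int x = (int)(s * (width - 1));
    int y = (int)(t * (height - 1));
    return texture[y * width + x];
}

/* What this fragment shader does for one pixel. cs and ct are the C
   coefficients (acc5) valid at each varying read. */
static uint32_t shade_fragment(const uint32_t *texture, int width, int height,
                               float vs, float vt,  /* from VARYING_READ */
                               float w,             /* r15a */
                               float cs, float ct)
{
    float s = w * vs + cs;   /* interpolated S */
    float t = w * vt + ct;   /* interpolated T, written to TMU0_T */
    return lookup_texel(texture, width, height, s, t);  /* TMU0_S write triggers the lookup */
}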

; W = r15a
; C = acc5
	
; fmul acc0, r15a, varying_read		; acc0 = W * S ; update C
dw 0x203E3DF7, 0x100059E0	

; fadd acc0, acc0, acc5				; acc0 = W * S + C
; fmul acc1, r15a, varying_read		; acc1 = W * T ; update C
dw 0x213E3177, 0x10024821

; fadd tmu0_T, acc1, acc5			; TMU0_T = W * T + C, reading the first uniform
dw 0x013E3377, 0x10020E67

; mov tmu0_S, acc0, acc5			; TMU0_S = W * S + C, reading the second uniform
dw 0x159E7000, 0x10020E27

; signal TMU texture read			; load data from TMU0 to acc4
dw 0x009E7000, 0xA00009E7

; orr tlb_colour_all, acc4, 0.0			; copy the texture pixel in Tile Buffer
dw 0x150009c7, 0xD0020BA7

dw 0x009E7000, 0x500009E7		; scoreboard unlock

dw 0x009E7000, 0x300009E7		; thread end

dw 0x009E7000, 0x100009E7		; branch delay 1

dw 0x009E7000, 0x100009E7		; branch delay 2
	

If you read the code carefully, and I do not doubt it ;-), you may have noticed two uniform reads. These reads happen automatically when the TMU0_T and TMU0_S registers are written; this is how the TMU is configured.
Since this is the fragment shader, both uniforms are located at the address given at byte 8 of the GL Shader State Record. In the previous post, Videocore : textured cube in rotation, we saw how to set up the texture unit in chapter 4. We must configure the same parameters here.

One difference, however: 2D texture mode is used, with S and T coordinates only. Here is the configuration of the uniforms for our fragment shader:

struc Fragment_Uniform_Data
{
	Tex_Config_Param0 TEX_T_FMT_ADDR, 0, TEX_MODE_2D, TEX_FLIP_Y_OFF, TEX_DATA_FORMAT_RGBA32R, 0

	Tex_Config_Param1 TEX_DATA_FORMAT_RGBA32R, TEX_T_FMT_HEIGHT, TEX_ETC_FLIP_OFF, TEX_T_FMT_WIDTH, TEX_MAXFILT_LINEAR, TEX_MINFILT_LINEAR, TEX_WRAP_MODE_CLAMP, TEX_WRAP_MODE_CLAMP
}
align 16
fragmentUniformData		Fragment_Uniform_Data
	

7.5 GL Shader State Record

As usual, we update the GL Shader State Record, in particular by setting the address of our two uniforms at byte 8.

db 2							; 3 : Fragment Shader Number of Varyings

dw fragmentUniformData		;  8–11 : Fragment Shader Uniforms Address

; shaded vertex PSE : Ys-Xs, Zs, 1/Wc, S, T
db 5 * 4						; 15 	: Vertex Shader Total Attributes Size

; vertex attributes (dw) : X, Y, Z, W, S, T
db 6 * 4 - 1					; 40 + n*8 : Attribute Array [n] Number of Bytes-1
db 6 * 4						; 41 + n*8 : Attribute Array [n] Memory Stride
	

7.6 What does it give ?

By combining a few changes of the zoom and origin parameters in a loop, we can do this:

8. 3D rotation

Now let's move on to 3D. So far we have only moved and zoomed a rectangle on a flat 2D surface. We will keep our textured rectangle and rotate it around the X axis. For this, we must take the depth into account and play with the X, Y and Z coordinates.

As we saw in the previous post, we must apply mathematical formulas to obtain the 3D coordinates. Our goal is to do these calculations directly in the shaders to optimize processing. On the ARM CPU side, we only compute the cosine and sine of the angles, which we provide to the shaders via uniforms.

As a reminder, here are the formulas that we must encode in our shaders :

  • Angle on the X axis :
    • y = y * cos(angle) - z * sin(angle)
      z = y * sin(angle) + z * cos(angle)
  • Angle on the Y axis :
    • z = z * cos(angle) - x * sin(angle)
      x = z * sin(angle) + x * cos(angle)
  • Angle on the Z axis :
    • x = x * cos(angle) - y * sin(angle)
      y = x * sin(angle) + y * cos(angle)

After these computations are applied to the X, Y and Z coordinates, we have to apply the perspective according to this formula: factor = fov + (distance * Z). The distance and fov parameters are provided via two uniforms.

The last step of the coding is to bring Z back into the interval [0.0, 1.0].
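
Put end to end, the per-vertex math we are about to encode looks like the C sketch below (an illustration only; the axes are applied in the order they appear in 8.2 below, X then Z then Y, and the numeric values are the uniforms listed in 8.1).

/* cx/sx, cz/sz, cy/sy are the cosines and sines passed as uniforms;
   distance and fov are the perspective uniforms (500.0 and 3840.0 here). */
static void rotate_and_project(float *x, float *y, float *z,
                               float cx, float sx,
                               float cz, float sz,
                               float cy, float sy,
                               float distance, float fov,
                               float *factor)
{
    /* rotation around the X axis */
    float y1 = *y * cx - *z * sx;
    float z1 = *y * sx + *z * cx;

    /* rotation around the Z axis */
    float x1 = *x * cz - y1 * sz;
    float y2 = *x * sz + y1 * cz;

    /* rotation around the Y axis */
    float z2 = z1 * cy - x1 * sy;
    float x2 = z1 * sy + x1 * cy;

    /* perspective: the scale factor now depends on the depth */
    *factor = fov + distance * z2;

    /* bring Z back into [0.0, 1.0] for the Videocore: (Z + 2.0) * 0.25 */
    *x = x2;
    *y = y2;
    *z = (z2 + 2.0f) * 0.25f;
}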

You now have all the steps to code. Ready? Let's go.

8.1 vertex uniform

First step: update the list of uniforms for our vertex shader. We must add the cosine and sine of the angle for each of the 3 axes, then the distance and fov parameters, then the coefficients used to readjust the depth Z.

    struc Vertex_Uniform_Data
    {
    	dw 0.0					; cos angle on X axis
    	dw 1.0					; sin angle on X axis
    	
    	dw 0.0					; cos angle on Z axis
    	dw 1.0					; sin angle on Z axis
    	
    	dw 0.0					; cos angle on Y axis
    	dw 1.0					; sin angle on Y axis
    		
    	dw 500.0					; distance	
    	dw 3840.0				; fov
    	dw -3840.0				; -fov
    	dw 2.0					; maj Z
    	dw 0.25					; maj Z (1/4)	
    
    	.originX:	dh 1440/2 * 16		; origin X in 12.4 fixed point
    	.originY:	dh 900/2 * 16		; origin Y in 12.4 fixed point
    }	
    	

8.2 vertex shader code

The vertex shader code that reads the X, Y, Z, W, S and T coordinates does not change. Two instructions are then added to read the uniforms and recover the cosine and sine of the angle on the X axis.

    ; nop ; mov acc0, uniform		 	
    dw 0x15827DF7, 0x10020827		; acc0 = cos angle on X axis
    
    ; nop ; mov acc1, uniform		 	
    dw 0x15827DF7, 0x10020867		; acc1 = sin angle on X axis
    	

The computations are then carried out according to the preceding formulas. Below is the code for the X axis:

    ; nop ; fmul acc2, r1a, acc0			; acc2 = Y * cos(angle)
    dw 0x20027030, 0x100049E2
    
    ; nop ; fmul acc3, r2a, acc1			; acc3 = Z * sin(angle)
    dw 0x200A7031, 0x100049E3
    	
    ; nop ; fsub r4a, acc2, acc3			; r4a = Y * cos(angle) - Z * sin(angle)
    dw 0x029E74F7, 0x10020127
    
    ; nop ; fmul acc2, r1a, acc1			; acc2 = Y * sin(angle)
    dw 0x20067031, 0x100049E2
    
    ; nop ; fmul acc3, r2a, acc0			; acc3 = Z * cos(angle)
    dw 0x200A7030, 0x100049E3	
    
    ; nop ; fadd r2a, acc2, acc3			; Z = Y * sin(angle) + Z * cos(angle)
    dw 0x019E74F7, 0x100200A7
    
    ; mov r1a, r4a						; Y = Y * cos(angle) - Z * sin(angle)
    dw 0x15127DF7, 0x10020067
    	

For the Y and Z axes, the uniforms are provided and the formulas are applied in the same way. And since I'm nice, I give them to you too ;-) Code for the Z axis:

    ; nop ; mov acc0, uniform		 	
    dw 0x15827DF7, 0x10020827		; acc0 = cos angle on Z axis
    
    ; nop ; mov acc1, uniform		 	
    dw 0x15827DF7, 0x10020867		; acc1 = sin angle on Z axis
    
    ; nop ; fmul acc2, r0a, acc0			; acc2 = X * cos(angle)
    dw 0x20027030, 0x100049E2
    
    ; nop ; fmul acc3, r1a, acc1			; acc3 = Y * sin(angle)
    dw 0x20067031, 0x100049E3	
    
    ; nop ; fsub r4a, acc2, acc3			; r4a = X * cos(angle) - Y * sin(angle)
    dw 0x029E74F7, 0x10020127
    
    ; nop ; fmul acc2, r0a, acc1			; acc2 = X * sin(angle)
    dw 0x20027031, 0x100049E2
    
    ; nop ; fmul acc3, r1a, acc0			; acc3 = Y * cos(angle)
    dw 0x20067030, 0x100049E3	
    
    ; nop ; fadd r1a, acc2, acc3			; Y = X * sin(angle) + Y * cos(angle)
    dw 0x019E74F7, 0x10020067
    
    ; mov r0a, r4a						; X = X * cos(angle) - Y * sin(angle)
    dw 0x15127DF7, 0x10020027
    	

Code for the Y axis :

    ; nop ; mov acc0, uniform		 	
    dw 0x15827DF7, 0x10020827		; acc0 = cos angle on Y axis
    
    ; nop ; mov acc1, uniform		 	
    dw 0x15827DF7, 0x10020867		; acc1 = sin angle on Y axis
    
    ; nop ; fmul acc2, r2a, acc0			; acc2 = Z * cos(angle)
    dw 0x200A7030, 0x100049E2
    
    ; nop ; fmul acc3, r0a, acc1			; acc3 = X * sin(angle)
    dw 0x20027031, 0x100049E3	
    
    ; nop ; fsub r4a, acc2, acc3			; r4a = Z * cos(angle) - X * sin(angle)
    dw 0x029E74F7, 0x10020127
    
    ; nop ; fmul acc2, r2a, acc1			; acc2 = Z * sin(angle)
    dw 0x200A7031, 0x100049E2
    
    ; nop ; fmul acc3, r0a, acc0			; acc3 = X * cos(angle)
    dw 0x20027030, 0x100049E3	
    
    ; nop ; fadd r0a, acc2, acc3			; X = Z * sin(angle) + X * cos(angle)
    dw 0x019E74F7, 0x10020027
    
    ; mov r2a, r4a						; Z = Z * cos(angle) - X * sin(angle)
    dw 0x15127DF7, 0x100200A7
    	

Note that the order in which the formulas for the 3 axes are applied matters, and can completely change the result on the screen!

We then retrieve the distance and fov parameters, which we apply according to the formula factor = fov + (distance * Z). The factor is thus updated, and will be applied in the code that follows, as we saw previously in chapter 4.2.

    ; nop ; mov acc0, uniform			; acc0 = distance
    dw 0x15827DF7, 0x10020827
    	
    ; nop ; mov acc1, uniform			; acc1 = fov
    dw 0x15827DF7, 0x10020867
    
    ; nop ; mov acc2, uniform			; acc2 = -fov
    dw 0x15827DF7, 0x100208A7
    
    ; acc0 = r2a * acc0					; acc0 = Z * distance
    dw 0x200A7030, 0x100049E0
    
    ; fadd acc1,acc1,acc0				; factor = fov + (Z * distance)
    dw 0x019E7237, 0x10020867
    
    ; fsub acc2,acc2,acc0				; -factor = -fov - (Z * distance)
    dw 0x029E7437, 0x100208A7
    	

Finally, we must bring the Zs value back into the interval [0.0, 1.0] for the Videocore, because after the previous calculations it can be negative or greater than 1.

    ; nop ; mov acc0, uniform			; acc0 = 2.0
    dw 0x15827DF7, 0x10020827
    
    ; fadd r2a,r2a,acc0					; Z = Z + 2.0
    dw 0x010A7C37, 0x100200A7
    
    ; nop ; mov acc0, uniform			; acc0 = 1 / 4.0
    dw 0x15827DF7, 0x10020827
    	
    ; fmul r2a,r2a,acc0					; Z = Z * 1/4.0
    dw 0x200A7030, 0x100059C2
    	

8.3 coordinate shader code

If you have not given up after all this, I congratulate you ;-)

In any case, good news: for the coordinate shader we do exactly the same thing. That is, we add the code we have just seen, without touching the attribute-reading and shaded-vertex-writing code.

8.4 fragment shader code

No change to the fragment shader code, because we still display our rectangle with the Raspberry Pi logo.

8.5 GL Shader State Record

Since the attributes per vertex and the shaded vertex have not changed, the GL Shader State Record does not change either.

8.6 code on the ARM CPU side

With a very small program on the ARM CPU side, we modify the cosine and sine of the angle to rotate our rectangle around the X axis.

Here is the code:

    	
    .DO1:
    	mov r0,r6
    	bl sincos
    	; update vertex uniforms			
    	vstr s1,[r4,8]		; cos(angle)
    	vstr s0,[r4,12]		; sin(angle)		
    
    	; display
    	bl v3dBinnerRun						
    	bl v3dRenderRun	
    	
    	; tempo
    	imm32 r0,10000
    	bl sysTimer1Wait
    	
    	; next angle
    	add r6,1
    	mov r0,180
    	cmp r6,r0
    		moveq r6,-180		
    b .DO1
    	

And the result, finally!

9. A pyramid in 3D

3D is nice, but with a 3D shape it will be even better. To avoid having too many vertices, we will take a pyramid: 5 vertices and 6 triangles, that is 4 triangles for the 4 faces, and 2 triangles for the square base of the pyramid.
We can define the vertices like this :

    struc Vertex_Shader_Data
    {
    	; Vertex 0
    	dw 1.0			; X
    	dw -1.0			; Y	
    	dw -1.0			; Z	
    	dw 1.0			; 1 / W
    	dw 0.0			; R
    	dw 1.0			; G
    	dw 1.0			; B
    	
    	; Vertex 1
    	dw 1.0			; X
    	dw -1.0			; Y	
    	dw 1.0			; Z	
    	dw 1.0			; 1 / W
    	dw 0.0			; R
    	dw 0.0			; G
    	dw 1.0			; B
    	
    	; Vertex 2
    	dw -1.0			; X
    	dw -1.0			; Y	
    	dw 1.0			; Z	
    	dw 1.0			; 1 / W
    	dw 1.0			; R
    	dw 0.0			; G
    	dw 1.0			; B
    	
    	; Vertex 3
    	dw -1.0			; X
    	dw -1.0			; Y	
    	dw -1.0			; Z	
    	dw 1.0			; 1 / W
    	dw 1.0			; R
    	dw 1.0			; G
    	dw 0.0			; B
    	
    	; Vertex 4
    	dw 0.0			; X
    	dw 1.0			; Y	
    	dw 0.0			; Z	
    	dw 1.0			; 1 / W
    	dw 1.0			; R
    	dw 1.0			; G
    	dw 0.0			; B
    }
    	

With the corresponding indexing of the vertices :

    struc data_Indices_List
    {
    	db 1, 3, 0		; triangle 1
    	db 0, 4, 1		; triangle 2
    	db 1, 4, 2		; triangle 3
    	db 2, 4, 3		; triangle 4
    	db 4, 0, 3		; triangle 5
    	db 1, 2, 3		; triangle 6
    }
    	

Our pyramid is defined with RGB colors, so we use the same shaders as in chapter 5, taking into account the higher number of vertices.

I'll let you guess the code this time!

And here is what we can do:

10. Spaceship

Finally, I made a demo that you can copy to an SD card and run on your Raspberry Pi.
It is interactive: you can use the arrow keys on the keyboard to rotate a spaceship around 2 axes. It works on Raspberry Pi 2 and 3.

For this, I added my USB module to the code to handle USB keyboards. It is still very simple, so for the moment only USB 1.1 keyboards are supported. I tested with a Microsoft Wired Keyboard 600 and a Dell KB1421: they work.
At the moment the Raspberry Pi Keyboard does not work, because it integrates an upstream USB hub, and I do not handle cascaded USB for now.
In addition, you must use a specific USB port of your Raspberry Pi, and connect the keyboard before powering on your Pi.
Here is the USB port to use, circled in red, depending on the version of your Raspberry Pi :

USB port to use

11. Thanks

I want to mention the JayStation2 Dev Blog, which opened the way to programming in the depths of the Videocore. The first time I discovered this site, I thought: how is it possible to program with hexadecimal code?
And then, after reading and re-reading his site many times, I finally understood, and managed to debug my programs, which allowed me to write this post. So if he reads me one day, I want to thank him.

Another site I want to recommend: https://github.com/LdB-ECM/Raspberry-Pi. I was inspired by the code written by Leon de Boer, for these 3D rotations among other things. C code that is very pleasant to read and understand.

That's it, I hope you enjoyed this post. If you have any questions, or if you want to comment, I'm here. See you soon.

    w1ll12520114
    2019 Jul 9 19:45

    Nice post, kudos for the acknowledgements :)

    Just wanted to say it would be nice of you to add an RSS feed (I recommend atom 1.0). It would save you lots of bandwidth since I visit the site manually every week or so.

    Lastly, I observed that when I do linebreaks in my comments, they don't appear when published. Apply the following function to your $message variable (or equivalent) inside your (I assume) "commentaires_post.php":

    $message = str_replace(["\r\n", "\r", "\n"], "<br>", $message);

    It will replace all the linebreaks that users can create with "<br>", which is the HTML line break tag.

    If you need more help with your website, please contact me via your form. I'm pretty bored right now...

    Have a nice day, and again, nice post!
    w1ll12520114
    2019 Jul 9 19:46

    Well, I see you solved the issue after all! Never mind, then :)
    Aran (webmaster)
    2019 Jul 12 23:39

    Hello,
    very happy to see you visiting my site regularly :-) You finally convinced me: I will add the RSS feed as soon as time allows me. In the meantime you can follow me on Twitter.
    Aran (webmaster)
    2019 Jul 14 10:59

    It's done : the RSS feed is in place. Much simpler than programming the Videocore ;-)
    For those who want to read the RSS feed, you need a small extension in the browser. For Chrome, I tested this one (to download from the Chrome Web Store) : "The RSS Aggregator".
