I’m working on a small game with a procedural map. As the whole game is set in a labyrinth-like building, I’ve decided to use tile maps. The map generation algorithm produces a 2D matrix of 0s and 1s. A 1 indicates that the tile is walkable, while a 0 represents an impassable field. In a simple 2D setting this could be represented by using a floor sprite for each 1 and a wall sprite for each 0.
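As a minimal sketch of that mapping (the ASCII “sprites” and the sample matrix are stand-ins for illustration, not the actual game assets):

```python
# A generated 0/1 matrix: 1 = walkable floor, 0 = impassable wall.
# The matrix below is a hand-made example, not real generator output.
tile_map = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

# Placeholder "sprites": in a real engine these would be floor/wall textures.
SPRITES = {0: "#", 1: "."}

def render(matrix):
    """Turn the 0/1 matrix into one string of tiles per map row."""
    return ["".join(SPRITES[cell] for cell in row) for row in matrix]

for row in render(tile_map):
    print(row)
```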
I’ve recently migrated my tools project from an old server to a new one. In the process I’ve decided to switch the database from MySQL to PostgreSQL. To do so I’ve used this amazing guide from calazan.com. As the guide was written for django 1.6, some minor changes were necessary when working with newer versions (I’ve used django 3.1.4).

Setup for migration

Steps 1 + 2 work as described in the guide above.
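For reference, the PostgreSQL connection in Django 3.1 lives in the `DATABASES` setting; a sketch of what that block looks like (the database name, user, and password here are placeholders, not values from the guide):

```python
# settings.py — hedged example of a PostgreSQL DATABASES block for Django 3.1.
# All credentials and names below are placeholders.
DATABASES = {
    "default": {
        # In newer Django versions the engine is simply "postgresql",
        # not the older "postgresql_psycopg2" path.
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "tools_db",       # assumed database name
        "USER": "tools_user",     # assumed database role
        "PASSWORD": "change-me",
        "HOST": "localhost",
        "PORT": "5432",
    }
}
```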
Estimating development time is similar to the coastline paradox. The more detailed your project description gets, the longer your estimate will be. You start with “building a blog will take me 20 hours”, then go to “OK, actually I will need 10 hours for the data structure, 10 hours for a way to create posts and 10 hours for the actual website”. For each of these estimates you will end up with more hours needed the further you break them down.
Open the web app in Chrome or Firefox and open the Developer Tools (F12 or Ctrl+Shift+I). Log in and look for an API call in the Network tab of the Developer Tools. Right-click the request and select Copy -> Copy as cURL. Open Postman and use the Import button -> Paste Raw Text. You can now execute the same request as done in the browser.
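The copied cURL command is essentially the browser’s request replayed with its session headers. As a sketch, the same request could also be rebuilt in code; the URL, cookie, and token values below are made-up placeholders, not from a real app:

```python
# Hedged sketch: the request that "Copy as cURL" captures corresponds
# roughly to this urllib request object. Nothing here is sent over the
# network; the request is only constructed for inspection.
import urllib.request

req = urllib.request.Request(
    "https://example.com/api/v1/items",   # placeholder endpoint
    headers={
        "Accept": "application/json",
        "Cookie": "sessionid=abc123",     # session cookie from the browser login
        "X-CSRFToken": "def456",          # CSRF token, if the API requires one
    },
)

print(req.full_url)
print(req.get_header("Cookie"))
```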
I’m currently helping to evaluate a large market research survey that uses Likert scales. To visualize the data I’ve tried several plots. The plots below were created with artificial data to expose the strengths and weaknesses of different plot types. You can find the code for all plots here!

Data distribution

Bar Plot: Mean Values

Pros:
- Easy to create
- Simple to read
- Q3 – Q5 distinguishable

Cons:
- Hides a lot of complexity
- Doesn’t show spread
- Creates high confidence in shown values

Bar Plot: Mean Values and Standard Deviation

Extension of the first plot.
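A sketch of how such a mean + standard deviation bar plot can be built with matplotlib; the question labels and the artificial responses below are made up for illustration and are not the survey data:

```python
# Hedged sketch: bar plot of per-question means with standard deviation
# as error bars, on artificial 1-5 Likert responses.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
questions = ["Q1", "Q2", "Q3", "Q4", "Q5"]
# 100 artificial responses per question, each on a 1-5 scale
responses = rng.integers(1, 6, size=(len(questions), 100))

means = responses.mean(axis=1)
stds = responses.std(axis=1)

fig, ax = plt.subplots()
ax.bar(questions, means, yerr=stds, capsize=4)
ax.set_ylabel("Mean response (1-5)")
ax.set_ylim(0, 5)
fig.savefig("likert_means.png")
```

The error bars restore some of the spread that a plain mean-value bar plot hides, though they still can’t show the shape of the distribution.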
I’ve recently run into a paradoxical situation while training a network to distinguish between two classes. I’ve used cross entropy as my loss of choice. On my training set the loss steadily decreased while the F1-Score improved. On the validation set the loss decreased shortly before increasing and leveling off around ~2, normally a clear sign of overfitting. However, the F1-Score on the validation set kept rising and reached ~0.92, with similarly high precision and recall.
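A toy calculation (not the actual training data) shows how this combination can arise: cross entropy averages per-example log losses, so a few very confident misclassifications can dominate the mean loss while barely denting the thresholded F1-Score.

```python
# Hedged illustration: 100 artificial binary examples where most predictions
# are moderately confident and correct, but 8 are confidently wrong.
import numpy as np

def binary_cross_entropy(y_true, p):
    """Mean binary cross entropy of predicted probabilities p."""
    eps = 1e-12
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def f1_score(y_true, y_pred):
    """F1 from hard 0/1 predictions."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = np.array([1] * 50 + [0] * 50)
# 92 examples predicted correctly with moderate confidence...
p = np.where(y_true == 1, 0.9, 0.1).astype(float)
# ...and 8 examples misclassified with extreme confidence
p[:4] = 1e-9          # true 1, predicted ~0
p[50:54] = 1 - 1e-9   # true 0, predicted ~1

y_pred = (p >= 0.5).astype(int)
print(f"loss = {binary_cross_entropy(y_true, p):.2f}")  # large, driven by 8 points
print(f"F1   = {f1_score(y_true, y_pred):.2f}")
```

Here the loss lands well above 1 while F1, precision, and recall all sit at 0.92: the network’s ranking of examples stays good even as its probability estimates become badly calibrated on a few of them.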