by wiml on 10/19/2023, 7:56:22 PM
If you really want to, there's also the `join` command, which has been part of the standard unix toolset for longer than many of us have been alive.
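A minimal sketch of what that looks like, assuming two comma-separated files that share a key in column 1 and carry a header row (a.csv and b.csv are just placeholders):
# join wants its inputs pre-sorted on the key; tail strips the header rows
join -t, -1 1 -2 1 \
  <(tail -n +2 a.csv | sort -t, -k1,1) \
  <(tail -n +2 b.csv | sort -t, -k1,1)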
by smartmic on 10/19/2023, 8:29:18 PM
> a performant join that avoids the hassle of SQL tables.
Hm, a hassle is not always the same hassle for everyone. Personally, I like SQL and I enjoy the power it can leverage on all kinds of data. As others wrote, sqlite for local data processing is a tool I do not want to miss, besides all of the great Unix coreutils and its (mostly GNU) ecosystem.
by Pxtl on 10/19/2023, 6:44:01 PM
Personally my weapon of choice for this kind of thing is just raw PowerShell. It means all your queries will be simple linear searches, and PowerShell is a warty-as-hell language, but the ergonomics aren't bad.
For example, the "all the cities in Iran" query:
Import-Csv GeoLite2-City-Locations-en.csv |
Where-Object {$_.country_name -eq "Iran"} |
Select-Object -ExpandProperty city_name
You could probably find modules to help with the IP-aware stuff like `cidr_match`, but the real place it would fall over is probably join performance, since you'd just be doing O(n*m) nested-loop operations.
by jakjak123 on 10/19/2023, 6:29:47 PM
I have done some similar, simpler data wrangling with xsv (https://github.com/BurntSushi/xsv) and jq. It could process my 800M rows in a couple of minutes (plus the time to read it out from the database =)
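For anyone curious, a rough sketch of that style against the GeoLite2 files from the article (I'm guessing at the exact column and file names, so treat geoname_id and the Blocks file as placeholders):
# filter rows, then project a column
xsv search -s country_name Iran GeoLite2-City-Locations-en.csv | xsv select city_name
# or join two CSVs on a shared column
xsv join geoname_id GeoLite2-City-Blocks-IPv4.csv geoname_id GeoLite2-City-Locations-en.csv | xsv select network,city_name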
by nerdponx on 10/19/2023, 6:31:02 PM
CLI data processing is always fun and cool. But it also tends to be limited in scope and practicality, not to mention performance, if you're chaining operations across separate commands and the data has to be re-parsed at every step.
If you want to avoid SQL, it's really hard to beat a "data frame" data structure for tabular data processing including things like joins.
by flusteredBias on 10/19/2023, 6:42:44 PM
I do the same with DuckDB and pretty print with tidy-viewer.
by chaps on 10/19/2023, 6:55:36 PM
Ehhhhhhhh. It hasn't really made a super convincing argument not to use SQL. A lot of what is described isn't intrinsically a SQL problem, but an ETL problem. E.g., complex types can be handled by loading the CSV into a TEXT table and then test-casting the types on a sample set, followed by the full set.
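Rough sketch of that staging workflow in the sqlite3 shell (data.csv, staging, and some_col are made up for illustration):
.mode csv
.import data.csv staging
-- spot-check the cast on a sample before committing to a type
SELECT some_col, CAST(some_col AS INTEGER) AS as_int FROM staging LIMIT 100;
-- then materialize the typed table from the TEXT staging table
CREATE TABLE typed AS SELECT CAST(some_col AS INTEGER) AS some_col FROM staging;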
And geo indexes are no joke. Using them has made 8hr long SQL queries take seconds.
by gabinator on 10/20/2023, 1:46:57 AM
Wait, you guys aren't copy and pasting every CSV by hand into an Excel spreadsheet?
by dima55 on 10/19/2023, 9:06:36 PM
An alternative very appropriate for some use cases: `vnl-join` from the vnlog toolkit (https://github.com/dkogan/vnlog). It uses the `join` tool from coreutils (works well, has been around forever) and `vnlog` for nice column labelling.
by brennaw1 on 10/19/2023, 7:37:33 PM
I'm pretty impressed by how Zed seems to handle the CSV overhead we typically see with standard SQL. That 'gradual slope' concept and the one-shot query without a ton of preprocessing? Pretty slick. Seeing the CSV parse transition to Zed lake queries resulting in that kind of speed-up is intriguing. Before jumping on board, though, I'd be curious to see how Zed holds up with even bigger datasets. The CIDR match and join ops are a nice touch, making it feel a tad SQL-like.
by zenincognito on 10/19/2023, 8:38:24 PM
Or, you can also use OpenRefine (formerly Google Refine).
Currently mangling a 4 GB file and working with APIs that use existing data columns to provide output. It's a great tool.
by holoduke on 10/20/2023, 12:31:34 AM
I would rather create a quick SQLite file. Create some tables with DBeaver and write a CSV processor to populate the SQLite file. Max 1 hour.
by kelsey9876543 on 10/20/2023, 1:42:45 AM
if you need more tools and power, check out csvkit: https://csvkit.readthedocs.io/en/latest/tutorial/3_power_too...
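The join case in particular is a one-liner with csvjoin (id, name, a.csv and b.csv below are placeholders; -c names the shared key column):
csvjoin -c id a.csv b.csv | csvcut -c id,name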
by hermitcrab on 10/19/2023, 8:08:40 PM
You can join 2 CSV files in a GUI ETL tool like Easy Data Transform in 3 clicks:
- drag the 2 CSV files onto the canvas
- click 'Join'
- select the 2 columns to join
by thedougd on 10/20/2023, 11:23:23 AM
I needed to do this yesterday. Thankfully I recalled a Hacker News suggestion to use SQLite. It took no time at all to get both files imported into tables and run successful join queries. I'm glad it was SQL, as I needed a few basic transforms (case folding, trim, etc.) and conditions.
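Roughly what that looked like in the sqlite3 shell, with invented file and column names standing in for mine:
.mode csv
.import a.csv a
.import b.csv b
-- case folding and trimming on the join key, plus a condition
SELECT a.*, b.category
FROM a JOIN b ON lower(trim(a.name)) = lower(trim(b.name))
WHERE b.category <> '';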
by tobilg on 10/19/2023, 8:25:19 PM
I'd rather just use DuckDB, write a few SQL queries, and be done.
This can be done in the terminal as well.
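Something along these lines, piping the SQL straight into the duckdb CLI (blocks.csv, locations.csv and geoname_id are placeholders; read_csv_auto infers the schema):
duckdb <<'EOF'
SELECT a.*, b.city_name
FROM read_csv_auto('blocks.csv') a
JOIN read_csv_auto('locations.csv') b USING (geoname_id)
LIMIT 10;
EOF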
by shortrounddev2 on 10/19/2023, 6:13:39 PM
would be cool to see a writeup converting these to equivalent powershell queries
by nathants on 10/19/2023, 8:34:58 PM
yes! non-standard data wrangling, even if just for fun, is a great way to gain a better understanding of your workload and hardware.
tldr; [de]serialization is your bottleneck, after that it’s general data processing. both are wasting insane levels of cpu cycles. network and disk, when accessed linearly, are free.
i remember first looking into this when ec2 i3 came out, only more so since. lambda for burst cpu capacity when you can’t wait 30s for ec2 spot is interesting too.
https://nathants.com/posts/performant-batch-processing-with-...
by qtd6214 on 10/19/2023, 6:52:00 PM
Please don’t do this for actual work you share with other people. There’s a good reason why pandas exists.
>But before you can get there, as a prerequisite you need to enumerate all the field names and desired data types to load it into a specific structured format. Maybe there’s another way?
In sqlite, this is just:
.mode csv
.import data.csv table
>When .import is run, its treatment of the first input row depends upon whether the target table already exists. If it does not exist, the table is automatically created and the content of the first input row is used to set the name of all the columns in the table.[0]
[0] https://www.sqlite.org/cli.html#importing_files_as_csv_or_ot...